
Caddy, Docker and Cloudflare

Introduction

The idea here is to set up a reverse proxy (Caddy) with automatic Cloudflare support and two possible caching systems (Souin and Varnish).
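Schematically, and assuming the layout used throughout this article (Varnish and Souin as optional cache layers behind Caddy), the traffic flow looks like this:

```text
Internet ──▶ Cloudflare ──▶ Caddy (automatic TLS, Souin cache)
                              ├──▶ Varnish ──▶ backend services
                              └──▶ backend services (direct)
```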

Prerequisites

As with most of my tutorials, I assume you have a valid domain name and that you use Cloudflare.

Required files

docker-compose.yml

version: "3.9"

#
# 2022-12-12
# caddy
#

services:

  caddy:
    container_name: caddy
    hostname: caddy
    image: zogg/caddy:latest
    restart: always
    stdin_open: true
    tty: true
    depends_on:
      - varnish
      #- olric
    networks:
      - proxy
    ports:
      - "80:80"
      - "443:443"
    expose:
      - "80"
      - "443"
    environment:
      TZ: "Europe/Paris"
      CF_API_EMAIL: [...]
      CF_DNS_API_TOKEN: "[...]"
      #CF_API_KEY: "[...]"
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/docker/standard/ssl/:/ssl/:ro
      - /opt/docker/standard/notification:/notify:ro
      - /opt/docker/standard/caddy/config/Caddyfile:/etc/caddy/Caddyfile
      - /opt/docker/standard/caddy/config/conf:/etc/caddy/conf
      - /opt/docker/standard/caddy/config/json:/config
      - /opt/docker/standard/caddy/work:/data
      - /mnt/caddy:/mnt/caddy

  varnish:
    container_name: varnish
    hostname: varnish
    image: zogg/varnish:latest
    restart: always
    stdin_open: true
    tty: true
    networks:
      - proxy
    ports:
      - "1080:80"
    command: "-a :1080,PROXY -s default,1G -p thread_pools=2 -p tcp_fastopen=on -p thread_pool_min=500 -p thread_pool_max=5000"
    environment:
      TZ: "Europe/Paris"
      VARNISH_SIZE: 1G
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/docker/standard/ssl/:/ssl/:ro
      - /opt/docker/standard/notification:/notify:ro
      - /opt/docker/standard/caddy/config/varnish.vcl:/etc/varnish/default.vcl:ro
      - /mnt/varnish:/var/lib/varnish
    tmpfs:
      - /tmp:exec

  #olric:
  #  container_name: olric
  #  hostname: olric
  #  image: olricio/olricd:latest
  #  restart: always
  #  stdin_open: true
  #  tty: true
  #  networks:
  #    - proxy
  #  ports:
  #    - "3320:3320"
  #  environment:
  #    TZ: "Europe/Paris"
  #  volumes:
  #    - /etc/timezone:/etc/timezone:ro
  #    - /etc/localtime:/etc/localtime:ro
  #    - /var/run/docker.sock:/var/run/docker.sock:ro
  #    - /opt/docker/standard/ssl/:/ssl/:ro
  #    - /opt/docker/standard/notification:/notify:ro
  #    - /opt/docker/standard/caddy/config/olric.yml:/etc/olricd.yaml:ro
networks:
  proxy:
    external: true

build.sh (Caddy)

#!/bin/bash
# 2022-12-01

clear
cd "$(dirname "$0")" || exit 1

IMAGE_BASE=zogg/caddy
IMAGE_NAME_LATEST=${IMAGE_BASE}:latest

export DOCKER_CLI_EXPERIMENTAL=enabled
docker run --privileged --rm tonistiigi/binfmt --install all

export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker buildx build --pull \
    --platform=linux/amd64 \
    --output=type=docker \
    --build-arg TZ=Europe/Paris \
    --build-arg CONCURRENCY=$(nproc) \
    -t "${IMAGE_NAME_LATEST}" \
    . 2>&1 | tee build.log

exit 0

Dockerfile (Caddy)

# 2022-12-03

FROM    --platform=linux/amd64 caddy:builder AS builder

ARG     TARGETPLATFORM
ARG     TARGETOS
ARG     TARGETARCH
ARG     BUILDPLATFORM
ARG     BUILDOS
ARG     BUILDARCH
ARG     BUILDVARIANT

CMD     ["bash"]

ENV     LANG                C.UTF-8

RUN xcaddy build latest \
 --with github.com/caddy-dns/cloudflare@latest \
 --with github.com/darkweak/souin/plugins/caddy@latest \
 --with github.com/darkweak/souin@latest

FROM caddy:latest

ENV     LANG                C.UTF-8

LABEL   author              "Olivier Le Bris"
LABEL   maintainer          "zogg"
LABEL   com.centurylinklabs.watchtower.enable=false
LABEL   org.opencontainers.image.source     "https://zogg.fr"
LABEL   org.opencontainers.image.licenses   MIT

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

build.sh (Varnish)

#!/bin/bash
# 2022-12-07

clear
cd "$(dirname "$0")" || exit 1

IMAGE_BASE=zogg/varnish
IMAGE_NAME_LATEST=${IMAGE_BASE}:latest

export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker buildx build --pull \
    --platform=linux/amd64 \
    --output=type=docker \
    --build-arg TZ=Europe/Paris \
    --build-arg CONCURRENCY=$(nproc) \
    -t "${IMAGE_NAME_LATEST}" \
    . 2>&1 | tee build.log

exit 0

Dockerfile (Varnish)

# 2022-12-07

FROM    --platform=linux/amd64 varnish:latest

ARG     TARGETPLATFORM
ARG     TARGETOS
ARG     TARGETARCH
ARG     BUILDPLATFORM
ARG     BUILDOS
ARG     BUILDARCH
ARG     BUILDVARIANT

CMD     ["bash"]

ENV     LANG                C.UTF-8

LABEL   author              "Olivier Le Bris"
LABEL   maintainer          "zogg"
LABEL   com.centurylinklabs.watchtower.enable=false
LABEL   org.opencontainers.image.source     "https://zogg.fr"
LABEL   org.opencontainers.image.licenses   MIT

# set the user to root, and install build dependencies
USER    root
RUN     set -e && \
        \
        apt-get update && \
        apt-get -y install $VMOD_DEPS && \
        \
        # install one, possibly multiple vmods
        install-vmod https://github.com/varnish/varnish-modules/releases/download/0.21.0/varnish-modules-0.21.0.tar.gz && \
        \
        # clean up and set the user back to varnish
        apt-get -y purge --auto-remove $VMOD_DEPS varnish-dev && \
        rm -rf /var/lib/apt/lists/*

USER varnish

Caddyfile

# 2022-12-12

(logs) {
	log {
		level error
	}
}
(debug) {
	debug
	log {
		level debug
	}
}

(redis) {
	redis {
		url [ip]:6379
	}
}

(olric) {
	olric {
		url [ip]:3320
	}
}

(souin) {
	allowed_http_verbs GET POST HEAD PATCH

	api {
		souin {
			security
		}
	}

	cdn {
		api_key {env.CF_DNS_API_TOKEN}
		dynamic true
		email {env.CF_API_EMAIL}
		hostname domain.com
		provider cloudflare
		strategy soft
	}

	headers Content-Type Authorization

	key {
		disable_body
		disable_host
		disable_method
	}

	#log_level debug
	log_level error

	import redis
	#import olric

	default_cache_control no-store
}

(cache) {
	order cache before rewrite
	cache {
		import souin
	}
}

(cloudflareTrustedProxies) {
	trusted_proxies 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 fc00::/7 173.245.48.0/20 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 141.101.64.0/18 108.162.192.0/18 190.93.240.0/20 188.114.96.0/20 197.234.240.0/22
}
(cloudflare) {
	tls {
		dns cloudflare {env.CF_DNS_API_TOKEN}
		resolvers 1.1.1.1 1.0.0.1
	}

	header {
		Host {upstream_hostport}

		X-Forwarded-Proto {scheme}
		X-Forwarded-For {host}

		defer
	}
}

(keepalive) {
	transport http {
		resolvers [local ip resolver]
		keepalive_idle_conns 512
		keepalive_idle_conns_per_host 256
	}
}

(reverseProxy) {
	import cloudflareTrustedProxies

	import keepalive
}

(headersGlobal) {
	X-Powered-By "[string override (optional)]"

	Host {host}
	X-Real-IP {host}
	X-Forwarded-For {host}

	-Server
	-Via
}

(headersSecurity) {
	Referrer-Policy "strict-origin-when-cross-origin"

	Strict-Transport-Security "max-age=31536000;includeSubDomains;preload"
	X-Permitted-Cross-Domain-Policies "none"

	X-Content-Type-Options "nosniff"

	X-Frame-Options "SAMEORIGIN"

	X-XSS-Protection 0

	Permissions-Policy "fullscreen=(*),display-capture=(self),accelerometer=(),battery=(),camera=(),autoplay=(self),vibrate=(self),geolocation=(self),midi=(self),notifications=(*),push=(*),microphone=(self),magnetometer=(self),gyroscope=(self),payment=(self)"

	Content-Security-Policy "default-src 'self' 'unsafe-inline' 'unsafe-eval' data: blob: wss: https:"
}

(headersRobots) {
	X-Robots-Tag "none,noarchive,nosnippet,notranslate,noimageindex"
}

(headersCaching) {
	Cache-Control "public,max-age=86400,s-maxage=86400,max-stale=3600,stale-while-revalidate=86400,stale-if-error=86400"
}

(common) {
	encode zstd gzip
	header {
		import headersGlobal
		import headersRobots
		import headersCaching
		import headersSecurity
		defer
	}
}

(pterodactyl) {
	encode zstd gzip
	header {
		import headersGlobal
		import headersRobots
		Sec-Fetch-Site "cross-site"
		X-Forwarded-Proto "https"
		Access-Control-Allow-Headers "*,Authorization"
		defer
	}
}

{
	import cache

	#import logs
	import debug

	admin off

	acme_dns cloudflare {env.CF_DNS_API_TOKEN}
	email {env.CF_API_EMAIL}
}

import /etc/caddy/conf/entries

varnish.vcl

# 2022-12-07
vcl 4.1;

import std;


# --------------------------------------------------
# Backend definitions
# --------------------------------------------------

# No default backend
backend default none;

backend httpdomain {
	.host = "[ip]";
	.port = "[port]";
}


# --------------------------------------------------
# Varnish VCL setup
# --------------------------------------------------

sub vcl_recv {

    # Normalize host header
    set req.http.host = std.tolower(req.http.host);

    # Normalize url
    set req.url = std.tolower(req.url);

    # Remove empty query string parameters
    # e.g.: www.example.com/index.html?
    if (req.url ~ "\?$") {
        set req.url = regsub(req.url, "\?$", "");
    }

    # Remove port number from host header
    set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");

    # Sorts query string parameters alphabetically for cache normalization purposes
    set req.url = std.querysort(req.url);

    # Remove the proxy header to mitigate the httpoxy vulnerability
    # See https://httpoxy.org/
    unset req.http.proxy;

    # Add X-Forwarded-Proto header when using https
    if (!req.http.X-Forwarded-Proto && (std.port(server.ip) == 443)) {
        set req.http.X-Forwarded-Proto = "https";
    }

    # Remove cookies except for these url:
    #  /admin
    #  /ghost
	if (
           !(req.url ~ "^/admin/")
        && !(req.url ~ "^/ghost/")
       ) {
		unset req.http.Cookie;
	}

	# Remove has_js and Google Analytics __* cookies.
	set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js)=[^;]*", "");

	# Remove a ";" prefix, if present.
	set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");



    #
    # Setup backend depending on 'host' name
    #
	    if (req.http.host ~ "httpdomain.domain.com")          	 { set req.backend_hint = httpdomain;    			}



    # Pipe (direct connection on to the backend) for websocket
    if (req.http.upgrade ~ "(?i)websocket") {
        return (pipe);
    }

    # Non-RFC2616 or CONNECT which is weird
	if (req.method != "GET" &&
		req.method != "HEAD" &&
		req.method != "PUT" &&
		req.method != "POST" &&
		req.method != "TRACE" &&
		req.method != "OPTIONS" &&
		req.method != "DELETE") {
			return (pipe);
	}

    # Don't cache for these url:
    #  /api
    #  /admin
    #  /ghost
    #  /p
    if (req.url ~ "/(api|admin|p|ghost)/") {
           return (pass);
    }

	# Mark static files with the X-Static-File header, and remove any cookies
	if (
        req.url ~ "^[^?]*\.(7z|avi|avif|bmp|bz2|css|csv|doc|docx|eot|flac|flv|gif|gz|ico|jpeg|jpg|js|json|less|mka|mkv|mov|mp3|mp4|mpeg|mpg|odt|ogg|ogm|opus|otf|pdf|png|ppt|pptx|rar|rtf|svg|svgz|swf|tar|tbz|tgz|ttf|txt|txz|wav|webm|webp|woff|woff2|xls|xlsx|xml|xz|zip)(\?.*)?$"
        ) {
		set req.http.X-Static-File = "true";
		unset req.http.Cookie;
	}

	return (hash);
}

sub vcl_hash {

    # Normalize url
    set req.url = std.tolower(req.url);

    hash_data(req.url);

    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }

    return (lookup);
}

sub vcl_backend_response {

    # Define grace
	set beresp.grace = 2m;
    set beresp.keep = 8m;

	# Here you clean the response headers, removing silly Set-Cookie headers and other mistakes your backend does
	# Inject URL & Host header into the object for asynchronous banning purposes
	set beresp.http.x-url = bereq.url;
	set beresp.http.x-host = bereq.http.host;

    # Default TTL
	set beresp.ttl = 60s;

    if (bereq.url ~ "^/static/") {
        set beresp.ttl = 1d;
    }

	# Keep the response in cache for 4 hours if the response has validating headers
	if (beresp.http.ETag || beresp.http.Last-Modified) {
		set beresp.keep = 4h;
	}

    # Allow GZIP compression on all JavaScript/CSS files and all text-based content
    # Allow caching extension
    if (beresp.http.content-type ~
        "text/plain|text/css|application/json|application/x-javascript|text/xml|application/xml|application/xml+rss|text/javascript"
        ) {
        set beresp.do_gzip = true;
        set beresp.http.cache-control = "public, max-age=1209600";
    }

    # Remove the Set-Cookie header for cacheable content
    # Only for HTTP GET & HTTP HEAD requests
	if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
		unset beresp.http.set-cookie;
	}

    # Don't cache content with a negative TTL
    # Don't cache content for no-cache or no-store content
    # Don't cache content where all headers are varied
	if (    beresp.ttl <= 0s
		||  beresp.http.Surrogate-control ~ "no-store"
        ||  (!beresp.http.Surrogate-Control && beresp.http.Cache-Control ~ "no-cache|no-store")
        ||	beresp.http.Vary == "*")
       {
			# Mark as Hit-For-Pass for the next 2 minutes
			set beresp.ttl = 120s;
			set beresp.uncacheable = true;
	}

    # Cache only successful responses
	if (
           beresp.status != 200
        && beresp.status != 410
        && beresp.status != 301
        && beresp.status != 302
        && beresp.status != 304
        && beresp.status != 307
        ) {
		set beresp.http.X-Cacheable = "NO:UNCACHEABLE";
		set beresp.ttl = 10s;
		set beresp.uncacheable = true;
	}
	else {
		# If we don't get a Cache-Control header from the backend, we default to caching all objects
		if (!beresp.http.Cache-Control) {
			set beresp.ttl = 1h;
			set beresp.http.X-Cacheable = "YES:FORCED";
		}

		# If the file is marked as static we cache it
		if (bereq.http.X-Static-File == "true") {
			unset beresp.http.Set-Cookie;
			set beresp.http.X-Cacheable = "YES:FORCED:STATIC";
			set beresp.ttl = 1h;
		}

		if (beresp.http.Set-Cookie) {
			set beresp.http.X-Cacheable = "NO:GOTCOOKIES";
		}
        elseif (beresp.http.Cache-Control ~ "private") {

			if (beresp.http.Cache-Control ~ "public" && bereq.http.X-Static-File == "true" ) {
                set beresp.http.Cache-Control = regsub(beresp.http.Cache-Control, "private,", "");
                set beresp.http.Cache-Control = regsub(beresp.http.Cache-Control, "private", "");
                set beresp.http.X-Cacheable = "YES";
			}
			elseif (bereq.http.X-Static-File == "true" && (beresp.http.Content-type ~ "image\/webp" || beresp.http.Content-type ~ "image\/avif") )
			{
                set beresp.http.Cache-Control = regsub(beresp.http.Cache-Control, "private,", "");
                set beresp.http.Cache-Control = regsub(beresp.http.Cache-Control, "private", "");
                set beresp.http.X-Cacheable = "YES";
			}
			else {
                set beresp.http.X-Cacheable = "NO:CACHE-CONTROL=PRIVATE";
			}
		}
	}

    return (deliver);
}

sub vcl_hit {

	if (obj.ttl >= 0s) {
		return (deliver);
	}

	if (std.healthy(req.backend_hint)) {
		if (obj.ttl + 300s > 0s) {
			# Hit after TTL expiration, but within grace period
			set req.http.grace = "normal (healthy server)";
			return (deliver);
		} else {
			# Hit after TTL and grace expiration
			return (restart);
		}
	} else {
		# Server is not healthy, retrieve from cache
		set req.http.grace = "unlimited (unhealthy server)";
		return (deliver);
	}

	return (restart);
}

sub vcl_deliver {

	# Debug header
	if (req.http.X-Cacheable) {
		set resp.http.X-Cacheable = req.http.X-Cacheable;
	}
    elseif (obj.uncacheable) {
		if (!resp.http.X-Cacheable) {
			set resp.http.X-Cacheable = "NO:UNCACHEABLE";
		}
	}
    elseif (!resp.http.X-Cacheable) {
		set resp.http.X-Cacheable = "YES";
	}
	# End Debug Header

	if (resp.http.X-Varnish ~ "[0-9]+ +[0-9]+") {
		set resp.http.X-Cache = "HIT";
	  } else {
		set resp.http.X-Cache = "MISS";
    }
    set resp.http.X-Cache-Hits = obj.hits;

	return (deliver);
}

sub vcl_pipe {

    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
}

olric.yml

olricd:
  # BindAddr denotes the address that Olric will bind to for communication
  # with other Olric nodes.
  bindAddr: 0.0.0.0

  # BindPort denotes the port that Olric will bind to for communication
  # with other Olric nodes.
  bindPort: 3320

  # KeepAlivePeriod denotes whether the operating system should send
  # keep-alive messages on the connection.
  keepAlivePeriod: 300s

  # IdleClose will automatically close idle connections after the specified duration.
  # Use zero to disable this feature.
  # idleClose: 300s

  # Timeout for bootstrap control
  #
  # An Olric node checks operation status before taking any action for the
  # cluster events, responding incoming requests and running API functions.
  # Bootstrapping status is one of the most important checkpoints for an
  # "operable" Olric node. BootstrapTimeout sets a deadline to check
  # bootstrapping status without blocking indefinitely.
  bootstrapTimeout: 5s

  # PartitionCount is 271, by default.
  partitionCount: 271

  # ReplicaCount is 1, by default.
  replicaCount: 1

  # Minimum number of successful writes to return a response for a write request.
  writeQuorum: 1

  # Minimum number of successful reads to return a response for a read request.
  readQuorum: 1

  # Switch to control read-repair algorithm which helps to reduce entropy.
  readRepair: false

  # Default value is SyncReplicationMode.
  #replicationMode: 0 # sync mode. for async, set 1
  replicationMode: 1

  # Minimum number of members to form a cluster and run any query on the cluster.
  memberCountQuorum: 1

  # Coordinator member pushes the routing table to cluster members in the case of
  # node join or left events. It also pushes the table periodically. routingTablePushInterval
  # is the interval between subsequent calls. Default is 1 minute.
  routingTablePushInterval: 1m

  # Olric can send push cluster events to cluster.events channel. Available cluster events:
  #
  # * node-join-event
  # * node-left-event
  # * fragment-migration-event
  # * fragment-received-event
  #
  # If you want to receive these events, set true to EnableClusterEventsChannel and subscribe to
  # cluster.events channel. Default is false.
  enableClusterEventsChannel: true

client:
  # Timeout for TCP dial.
  #
  # The timeout includes name resolution, if required. When using TCP, and the host in the address parameter
  # resolves to multiple IP addresses, the timeout is spread over each consecutive dial, such that each is
  # given an appropriate fraction of the time to connect.
  dialTimeout: 5s

  # Timeout for socket reads. If reached, commands will fail
  # with a timeout instead of blocking. Use value -1 for no timeout and 0 for default.
  # Default is DefaultReadTimeout
  readTimeout: 3s

  # Timeout for socket writes. If reached, commands will fail
  # with a timeout instead of blocking.
  # Default is DefaultWriteTimeout
  writeTimeout: 3s

  # Maximum number of retries before giving up.
  # Default is 3 retries; -1 (not 0) disables retries.
  #maxRetries: 3

  # Minimum backoff between each retry.
  # Default is 8 milliseconds; -1 disables backoff.
  #minRetryBackoff: 8ms

  # Maximum backoff between each retry.
  # Default is 512 milliseconds; -1 disables backoff.
  #maxRetryBackoff: 512ms

  # Type of connection pool.
  # true for FIFO pool, false for LIFO pool.
  # Note that fifo has higher overhead compared to lifo.
  #poolFIFO: false

  # Maximum number of socket connections.
  # Default is 10 connections per every available CPU as reported by runtime.GOMAXPROCS.
  #poolSize: 0

  # Minimum number of idle connections which is useful when establishing
  # new connection is slow.
  #minIdleConns:
  minIdleConns: 16

  # Connection age at which client retires (closes) the connection.
  # Default is to not close aged connections.
  #maxConnAge:
  maxConnAge: 1m

  # Amount of time client waits for connection if all connections are busy before
  # returning an error. Default is ReadTimeout + 1 second.
  #poolTimeout: 3s

  # Amount of time after which client closes idle connections.
  # Should be less than server's timeout.
  # Default is 5 minutes. -1 disables idle timeout check.
  idleTimeout: 5m

  # Frequency of idle checks made by idle connections reaper.
  # Default is 1 minute. -1 disables idle connections reaper,
  # but idle connections are still discarded by the client
  # if IdleTimeout is set.
  idleCheckFrequency: 1m


logging:
  # DefaultLogVerbosity denotes default log verbosity level.
  #
  # * 1 - Generally useful for this to ALWAYS be visible to an operator
  #   * Programmer errors
  #   * Logging extra info about a panic
  #   * CLI argument handling
  # * 2 - A reasonable default log level if you don't want verbosity.
  #   * Information about config (listening on X, watching Y)
  #   * Errors that repeat frequently that relate to conditions that can be
  #     corrected
  # * 3 - Useful steady state information about the service and
  #     important log messages that may correlate to
  #   significant changes in the system.  This is the recommended default log
  #     level for most systems.
  #   * Logging HTTP requests and their exit code
  #   * System state changing
  #   * Controller state change events
  #   * Scheduler log messages
  # * 4 - Extended information about changes
  #   * More info about system state changes
  # * 5 - Debug level verbosity
  #   * Logging in particularly thorny parts of code where you may want to come
  #     back later and check it
  # * 6 - Trace level verbosity
  #   * Context to understand the steps leading up to neterrors and warnings
  #   * More information for troubleshooting reported issues
  #verbosity: 3
  verbosity: 0

  # Default LogLevel is DEBUG. Available levels: "DEBUG", "WARN", "ERROR", "INFO"
  #level: INFO
  level: ERROR
  output: stderr

memberlist:
  environment: lan

  # Configuration related to what address to bind to and ports to
  # listen on. The port is used for both UDP and TCP gossip. It is
  # assumed other nodes are running on this port, but they do not need
  # to.
  bindAddr: 0.0.0.0
  bindPort: 3322

  # EnableCompression is used to control message compression. This can
  # be used to reduce bandwidth usage at the cost of slightly more CPU
  # utilization. This is only available starting at protocol version 1.
  enableCompression: false
  #enableCompression: true

  # JoinRetryInterval is the time gap between attempts to join an existing
  # cluster.
  joinRetryInterval: 1ms

  # MaxJoinAttempts denotes the maximum number of attempts to join an existing
  # cluster before forming a new one.
  maxJoinAttempts: 1

  # See service discovery plugins
  #peers:
  #  - "localhost:3325"

  #advertiseAddr: ""
  #advertisePort: 3322
  #suspicionMaxTimeoutMult: 6
  #disableTCPPings: false
  #awarenessMaxMultiplier: 8
  #gossipNodes: 3
  #gossipVerifyIncoming: true
  #gossipVerifyOutgoing: true
  #dnsConfigPath: "/etc/resolv.conf"
  #handoffQueueDepth: 1024
  #udpBufferSize: 1400

dmaps:
  engine:
    name: kvstore
    config:
      #tableSize: 524288 # bytes
      tableSize: 1048576 # bytes
#  checkEmptyFragmentsInterval: 1m
#  triggerCompactionInterval: 10m
#  numEvictionWorkers: 1
#  maxIdleDuration: ""
#  ttlDuration: "100s"
#  maxKeys: 100000
#  maxInuse: 1000000
#  lRUSamples: 10
#  evictionPolicy: "LRU"
#  custom:
#   foobar:
#      maxIdleDuration: "60s"
#      ttlDuration: "300s"
#      maxKeys: 500000
#      lRUSamples: 20
#      evictionPolicy: "NONE"

entries

# 2022-12-12

@httpsdomain host httpsdomain.domain.com
handle @httpsdomain {
	import common
	reverse_proxy https://[ip]:[port] {
		import reverseProxy
		transport http {
			tls
			tls_insecure_skip_verify #allow self-signed certificates
		}
	}
}

@httpdomain host httpdomain.domain.com
handle @httpdomain {
	import common
	#cache #uncomment to enable Souin caching
	reverse_proxy [ip]:[port] {
		import reverseProxy
	}
	# use this if you prefer Varnish caching
	#reverse_proxy [varnish_ip]:[varnish_port] {
	#	import reverseProxy
	#}
}

@httpdomainwithpath host httpdomainwithpath.domain.com
handle @httpdomainwithpath {
	import common

	handle_path /thepath {
		reverse_proxy [ip]:[port1] {
			import reverseProxy
		}
	}

	handle {
		reverse_proxy [ip]:[port2] {
			import reverseProxy
		}
	}
}

Setup

I assume you are running Portainer on Linux.

You will need to create (or adapt as needed) the following directories:

  • /opt/docker/standard/caddy/config
  • /opt/docker/standard/caddy/work

Then place the Caddyfile in the config directory and the entries file in config/conf (which is mounted into the container as /etc/caddy/conf).
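A minimal way to stage that layout (run as root or with sudo; `CADDY_BASE` is just a convenience variable for the path the compose file above expects):

```shell
# Create the bind-mount tree expected by docker-compose.yml.
CADDY_BASE="${CADDY_BASE:-/opt/docker/standard/caddy}"
mkdir -p "${CADDY_BASE}/config/conf" "${CADDY_BASE}/work"

# Then drop the configuration files in place:
#   cp Caddyfile "${CADDY_BASE}/config/Caddyfile"
#   cp entries   "${CADDY_BASE}/config/conf/entries"
```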

In the docker-compose.yml file you will use in Portainer (a stack is the most convenient option), remember to update the CF_* environment variables with your own Cloudflare keys.
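For reference, the relevant lines look like this. The values below are placeholders; the token should typically be a Cloudflare API token with Zone / DNS / Edit permission (and Zone / Zone / Read) on the zone Caddy manages:

```yaml
environment:
  TZ: "Europe/Paris"
  CF_API_EMAIL: "you@example.com"     # your Cloudflare account email
  CF_DNS_API_TOKEN: "your-api-token"  # scoped API token, not the global key
```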

Also remember to adjust, in the entries file, the domains and/or subdomains that Caddy should handle.

Building the Docker images

Simply run the build.sh script shown above, as follows, to generate the image:

sudo bash build.sh

Notes

With this setup, you get the following features:

  • Distributed load balancing (if you use Caddy's load balancing)
  • Automatic SSL (HTTPS)
  • Wildcard SSL certificates
  • HTTP/3 support
  • CORS protection
  • Caching built into Caddy: Souin
  • Caching external to Caddy: Varnish

Conclusion

This tutorial gives you a solid foundation for getting started with Caddy as a caching server.

In a HomeLab, it lets you expose web services while keeping a good security baseline (SSL/HTTPS, load management).

Adapt it to your own needs!

Changelog

2022-12-12

  • Updated the compose file (with Olric as an example cache backend for Souin)
  • Fixed the Caddyfile and added Souin
  • Fixed the entries file for subdomain handling
  • Added the Olric and Souin configuration
This article is licensed under CC BY 4.0 by its author.

© 2022- Olivier. Some rights reserved.
