varnishd¶
HTTP accelerator daemon¶
Manual section: 1
SYNOPSIS¶
varnishd [-a [name=][address][:port][,PROTO][,user=<user>][,group=<group>][,mode=<mode>]] [-b [host[:port]|path]] [-C] [-d] [-F] [-f config] [-h type[,options]] [-I clifile] [-i identity] [-j jail[,jailoptions]] [-l vsl] [-M address:port] [-n name] [-P file] [-p param=value] [-r param[,param…]] [-S secret-file] [-s [name=]kind[,options]] [-T address[:port]] [-t TTL] [-V] [-W waiter] [-z id=certfile]
varnishd [-x parameter|vsl|cli|builtin|optstring]
varnishd [-?]
DESCRIPTION¶
The varnishd daemon accepts HTTP requests from clients, passes them on to a backend server and caches the returned documents to better satisfy future requests for the same document.
OPTIONS¶
Basic options¶
-a <[name=][address][:port][,PROTO][,user=<user>][,group=<group>][,mode=<mode>]>
Listen for client requests on the specified address and port. The address can be a host name (“localhost”), an IPv4 dotted-quad (“127.0.0.1”), an IPv6 address enclosed in square brackets (“[::1]”), or a path beginning with a ‘/’ for a Unix domain socket (“/path/to/listen.sock”). If address is not specified, varnishd will listen on all available IPv4 and IPv6 interfaces. The port can be a port number (80), a service name (http), or a port range (80-81). Port ranges are inclusive and cannot overlap. If port is not specified, port 80 (http) is used. At least one of address or port is required.
If a Unix domain socket is specified as the listen address, then the user, group and mode sub-arguments may be used to specify the permissions of the socket file – use names for user and group, and a 3-digit octal value for mode. These sub-arguments are not permitted if an IP address is specified. When Unix domain socket listeners are in use, all VCL configurations must have version >= 4.1.
Name is referenced in logs. If name is not specified, “a0”, “a1”, etc. is used. An additional protocol type can be set for the listening socket with PROTO. Valid protocol types are: HTTP (default), and PROXY.
Multiple listening addresses can be specified by using different -a arguments.
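For illustration, a hypothetical invocation combining a plain IP listener, a named PROXY listener and a Unix domain socket might look like this (the names and paths are placeholders):
varnishd -a :80 -a proxy=127.0.0.1:8443,PROXY -a /var/run/varnish.sock,user=varnish,group=varnish,mode=660 -f /etc/varnish/default.vcl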
-A cfgfile
Configuration file for TLS listen endpoints. See varnish-tls-conf for more details. This option is only available on platforms with OpenSSL 1.1 or newer.
-b <[host[:port]|path]>
Use the specified host as backend server. If port is not specified, the default is 8080. -b can be used only once, and not together with -f.
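As a minimal example (host and port are placeholders), the following serves clients on port 80 from a single backend without any VCL file:
varnishd -a :80 -b 127.0.0.1:8080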
-f config
Use the specified VCL configuration file instead of the builtin default. See VCL for details on VCL syntax. If a single -f option is used, then the VCL instance loaded from the file is named “boot” and immediately becomes active. If more than one -f option is used, the VCL instances are named “boot0”, “boot1” and so forth, in the order corresponding to the -f arguments, and the last one is named “boot”, which becomes active. Either -b or one or more -f options must be specified, but not both, and they cannot both be left out, unless -d is used to start varnishd in debugging mode. If the empty string is specified as the sole -f option, then varnishd starts without starting the worker process, and the management process will accept CLI commands. You can also combine an empty -f option with an initialization script (-I option) and the child process will be started if there is an active VCL at the end of the initialization. When used with a relative file name, config is searched in the vcl_path.
-n name
Specify the name for this instance. This name is used to construct the name of the directory in which varnishd keeps temporary files and persistent state. If the specified name begins with a forward slash, it is interpreted as the absolute path to the directory.
-z id=certfile
Backend TLS client certificate. The ID can be used when specifying a client certificate for use for a TLS-enabled backend.
certfile specifies the path to an X509 certificate PEM file, containing a private key and a certificate, and optionally any intermediate certificate if applicable.
This option can be specified multiple times to load multiple certificates.
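As an illustration, a hypothetical invocation loading two backend client certificates might look like this (the IDs and paths are placeholders):
varnishd -z origin1=/etc/varnish/certs/origin1.pem -z origin2=/etc/varnish/certs/origin2.pem -a :80 -f /etc/varnish/default.vcl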
Documentation options¶
For these options, varnishd prints information to standard output and exits. When a -x option is used, it must be the only option (it outputs documentation in reStructuredText, aka RST).
-?
Print the usage message.
-x parameter
Print documentation of the runtime parameters (-p options), see List of Parameters.
-x vsl
Print documentation of the tags used in the Varnish shared memory log, see VSL.
-x cli
Print documentation of the command line interface, see varnish-cli.
-x builtin
Print the contents of the default VCL program builtin.vcl.
-x optstring
Print the optstring parameter to getopt(3) to help writing wrapper scripts.
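As a sketch of how -x optstring can be used, a wrapper script might feed the output straight to getopts; the variable names below are illustrative:
#!/bin/sh
# Parse the same options varnishd accepts, without hard-coding the optstring.
optstring="$(varnishd -x optstring)"
while getopts "$optstring" opt; do
    case $opt in
        n) echo "instance name: $OPTARG" ;;
        *) ;;   # other options handled elsewhere
    esac
done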
Operations options¶
-F
Do not fork, run in the foreground. Only one of -F or -d can be specified, and -F cannot be used together with -C.
-T <address[:port]>
Offer a management interface on the specified address and port. See varnish-cli for documentation of the management commands. To disable the management interface use none.
-M <address:port>
Connect to this port and offer the command line interface. Think of it as a reverse shell. When running with -M and there is no backend defined the child process (the cache) will not start initially.
-P file
Write the PID of the process to the specified file.
-i identity
Specify the identity of the Varnish server. This can be accessed using server.identity from VCL and with VSM_Name() from utilities. If not specified the output of gethostname(3) is used.
-I clifile
Execute the management commands in the file given as clifile before the worker process starts, see CLI Command File.
Tuning options¶
-t TTL
Specifies the default time to live (TTL) for cached objects. This is a shortcut for specifying the default_ttl run-time parameter.
-p <param=value>
Set the parameter specified by param to the specified value, see List of Parameters for details. This option can be used multiple times to specify multiple parameters.
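For example, the following hypothetical invocation overrides a few of the runtime parameters documented under List of Parameters (the values are arbitrary):
varnishd -a :80 -f /etc/varnish/default.vcl -p default_ttl=300 -p default_grace=30 -p thread_pool_min=200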
-s <[name=]type[,options]>
Use the specified storage backend. See Storage Backend section. This option can be used multiple times to specify multiple storage files. Name is referenced in logs, VCL, statistics, etc. If name is not specified, “s0”, “s1” and so forth is used.
-l <vsl>
Specifies size of the space for the VSL records, shorthand for -p vsl_space=<vsl>. Scaling suffixes like ‘K’ and ‘M’ can be used up to (G)igabytes. See vsl_space for more information.
Security options¶
-r <param[,param…]>
Make the listed parameters read only. This gives the system administrator a way to limit what the Varnish CLI can do. Consider making parameters such as cc_command, vcc_allow_inline_c and vmod_path read only as these can potentially be used to escalate privileges from the CLI.
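Following that advice, a sketch of locking down the CLI could be:
varnishd -a :80 -f /etc/varnish/default.vcl -r cc_command,vcc_allow_inline_c,vmod_path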
-S secret-file
Path to a file containing a secret used for authorizing access to the management port. To disable authentication use none. If this argument is not provided, a secret drawn from the system PRNG will be written to a file called _.secret in the working directory (see the -n option). Thus, users wishing to delegate control over varnish will probably want to create a custom secret file with appropriate permissions (ie. readable by a unix group to delegate control to).
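One common way to create such a custom secret file is sketched below; the group name and paths are placeholders:
dd if=/dev/urandom of=/etc/varnish/secret count=1 bs=512
chgrp varnish-admins /etc/varnish/secret
chmod 640 /etc/varnish/secret
varnishd -S /etc/varnish/secret -T 127.0.0.1:6082 -a :80 -f /etc/varnish/default.vcl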
-j <jail[,jailoptions]>
Specify the jailing mechanism to use. See Jail section.
Advanced, development and debugging options¶
-d
Enables debugging mode: The parent process runs in the foreground with a CLI connection on stdin/stdout, and the child process must be started explicitly with a CLI command. Terminating the parent process will also terminate the child. Only one of -d or -F can be specified, and -d cannot be used together with -C.
-C
Print VCL code compiled to C language and exit. Specify the VCL file to compile with the -f option. Either -f or -b must be used with -C, and -C cannot be used with -F or -d.
-V
Display the version number and exit. This must be the only option.
-h <type[,options]>
Specifies the hash algorithm. See Hash Algorithm section for a list of supported algorithms.
-W waiter
Specifies the waiter type to use.
Hash Algorithm¶
The following hash algorithms are available:
-h critbit
Self-scaling tree structure. The default hash algorithm in Varnish Cache 2.1 and onwards. In comparison to a more traditional B tree the critbit tree is almost completely lockless. Do not change this unless you are certain what you’re doing.
-h simple_list
A simple doubly-linked list. Not recommended for production use.
-h <classic[,buckets]>
A standard hash table. The hash key is the CRC32 of the object’s URL modulo the size of the hash table. Each table entry points to a list of elements which share the same hash key. The buckets parameter specifies the number of entries in the hash table. The default is 16383.
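For example, to use the classic hash with a larger table than the default (the bucket count below is arbitrary):
varnishd -h classic,131071 -a :80 -f /etc/varnish/default.vcl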
Storage Backend¶
The following storage types are available:
-s <default[,size]>
The default storage type resolves to umem where available and malloc otherwise.
-s <malloc[,size]>
malloc is a memory based backend.
-s <umem[,size]>
umem is a storage backend which is more efficient than malloc on platforms where it is available. See the section on umem in chapter Storage backends of The Varnish Users Guide for details.
-s <file,path[,size[,granularity[,advice]]]>
The file backend stores data in a file on disk. The file will be accessed using mmap. Note that this storage provides no cache persistence. The path is mandatory. If path points to a directory, a temporary file will be created in that directory and immediately unlinked. If path points to a non-existing file, the file will be created. If size is omitted, and path points to an existing file with a size greater than zero, the size of that file will be used. If not, an error is reported. Granularity sets the allocation block size. Defaults to the system page size or the filesystem block size, whichever is larger. Advice tells the kernel how varnishd expects to use this mapped region so that the kernel can choose the appropriate read-ahead and caching techniques. Possible values are normal, random and sequential; the default is normal.
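As an illustration (the names, path and sizes are placeholders), a file-backed store alongside a small malloc store could be configured as:
varnishd -a :80 -f /etc/varnish/default.vcl -s cache=file,/var/lib/varnish/varnish_storage.bin,10G -s mem=malloc,256m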
-s mse[,<path-to-config>]
This configures a stevedore using the Massive Storage Engine. It takes a single optional argument to a configuration file that further defines the storage. See the varnish-mse manpage for more information.
-s mse4[,<path-to-config>]
This configures Varnish Enterprise to use the Massive Storage Engine version 4 (MSE4) as a stevedore. It takes a single optional argument to a configuration file that further configures the stevedore.
If no configuration file is specified, a simple setup using all defaults is configured. No disk backing is enabled.
Note that using the MSE4 stevedore is mutually exclusive to all of the other stevedores supported in Varnish Enterprise. This means that only a single -s argument using mse4 will be accepted, and giving any further -s arguments to varnishd will cause an error.
See the mse4 manpage for more information about using and configuring MSE4.
You can also prefix the type with NAME= to explicitly name a storage:
-s myStorage=malloc,5G
This allows you to address it more easily in VCL:
set beresp.storage = storage.myStorage;
If the name is omitted, Varnish will name storages sN, starting with s0 and incrementing N for every new storage.
Jail¶
Varnish jails are a generalization over various platform specific methods to reduce the privileges of varnish processes. They may have specific options. Available jails are:
-j solaris
Reduce privileges(5) for varnishd and sub-processes to the minimally required set. Only available on platforms which have the setppriv(2) call.
-j <linux[,user=`user`][,ccgroup=`group`][,workuser=`user`]>
Default on Linux platforms, it overloads the UNIX jail with Linux-specific mechanisms.
-j <unix[,user=`user`][,ccgroup=`group`][,workuser=`user`]>
Default on all other platforms when varnishd is started with an effective uid of 0 (“as root”). With the unix jail mechanism activated, varnish will switch to an alternative user for subprocesses and change the effective uid of the master process whenever possible. The optional user argument specifies which alternative user to use. It defaults to varnish. The optional ccgroup argument specifies a group to add to varnish subprocesses requiring access to a c-compiler. There is no default. The optional workuser argument specifies an alternative user to use for the worker process. It defaults to vcache.
-j none
Last resort jail choice: With jail mechanism none, varnish will run all processes with the privileges it was started with.
Management Interface¶
If the -T option was specified, varnishd will offer a command-line management interface on the specified address and port. The recommended way of connecting to the command-line management interface is through varnishadm.
The commands available are documented in varnish-cli.
CLI Command File¶
The -I option makes it possible to run arbitrary management commands when varnishd is launched, before the worker process is started. In particular, this is the way to load configurations, apply labels to them, and make a VCL instance active that uses those labels on startup:
vcl.load panic /etc/varnish_panic.vcl
vcl.load siteA0 /etc/varnish_siteA.vcl
vcl.load siteB0 /etc/varnish_siteB.vcl
vcl.load siteC0 /etc/varnish_siteC.vcl
vcl.label siteA siteA0
vcl.label siteB siteB0
vcl.label siteC siteC0
vcl.load main /etc/varnish_main.vcl
vcl.use main
Every line in the file, including the last line, must be terminated by a newline or carriage return.
If a command in the file is prefixed with ‘-‘, failure will not abort the startup.
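For example, loading an optional VCL whose absence should not prevent startup could be written with the ‘-’ prefix (the file name is a placeholder):
-vcl.load siteD0 /etc/varnish_siteD.vcl
-vcl.label siteD siteD0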
RUN TIME PARAMETERS¶
Run Time Parameter Flags¶
Runtime parameters are marked with shorthand flags to avoid repeating the same text over and over in the table below. The meanings of the flags are:
experimental
We have no solid information about good/bad/optimal values for this parameter. Feedback with experience and observations are most welcome.
delayed
This parameter can be changed on the fly, but will not take effect immediately.
restart
The worker process must be stopped and restarted, before this parameter takes effect.
reload
The VCL programs must be reloaded for this parameter to take effect.
experimental
We’re not really sure about this parameter, tell us what you find.
wizard
Do not touch unless you really know what you’re doing.
only_root
Only works if varnishd is running as root.
Default Value Exceptions on 32 bit Systems¶
Be aware that on 32 bit systems, certain default values are reduced relative to the values listed below, in order to conserve VM space:
- workspace_client: 16k
- http_resp_size: 8k
- http_req_size: 12k
- gzip_stack_buffer: 4k
- thread_pool_stack: 64k
List of Parameters¶
This text is produced from the same text you will find in the CLI if you use the param.show command:
accept_filter¶
NB: This parameter depends on a feature which is not available on all platforms.
- Units: bool
- Default: off
- Flags:
Enable kernel accept-filters.
acceptor_sleep_decay¶
- Default: 0.9
- Minimum: 0
- Maximum: 1
- Flags: experimental
If we run out of resources, such as file descriptors or worker threads, the acceptor will sleep between accepts. This parameter (multiplicatively) reduces the sleep duration for each successful accept. (ie: 0.9 = reduce by 10%)
acceptor_sleep_incr¶
- Units: seconds
- Default: 0.000
- Minimum: 0.000
- Maximum: 1.000
- Flags: experimental
If we run out of resources, such as file descriptors or worker threads, the acceptor will sleep between accepts. This parameter controls how much longer we sleep, each time we fail to accept a new connection.
acceptor_sleep_max¶
- Units: seconds
- Default: 0.050
- Minimum: 0.000
- Maximum: 10.000
- Flags: experimental
If we run out of resources, such as file descriptors or worker threads, the acceptor will sleep between accepts. This parameter limits how long it can sleep between attempts to accept new connections.
backend_cooloff¶
- Units: seconds
- Default: 60.000
- Minimum: 60.000
- Flags: experimental
How long we wait before cleaning up deleted backends.
backend_idle_timeout¶
- Units: seconds
- Default: 60.000
- Minimum: 1.000
Timeout before we close unused backend connections.
backend_local_error_holddown¶
- Units: seconds
- Default: 10.000
- Minimum: 0.000
- Flags: experimental
When connecting to backends, certain error codes (EADDRNOTAVAIL, EACCESS, EPERM) signal a local resource shortage or configuration issue for which retrying connection attempts may worsen the situation due to the complexity of the operations involved in the kernel. This parameter prevents repeated connection attempts for the configured duration.
backend_remote_error_holddown¶
- Units: seconds
- Default: 0.250
- Minimum: 0.000
- Flags: experimental
When connecting to backends, certain error codes (ECONNREFUSED, ENETUNREACH) signal fundamental connection issues such as the backend not accepting connections or routing problems for which repeated connection attempts are considered useless. This parameter prevents repeated connection attempts for the configured duration.
backend_wait_limit¶
- Default: 0
- Minimum: 0
- Flags: experimental
Maximum number of transactions that can queue waiting for a backend connection to become available. The default of 0 (zero) means that there is no transaction queueing. VCL can override this default value for each backend.
backend_wait_timeout¶
- Units: seconds
- Default: 0.000
- Minimum: 0.000
- Flags: experimental
When a backend has no connections available for a transaction, the transaction can be queued (see backend_wait_limit) to wait for a connection. This is the default time that the transaction will wait before giving up. VCL can override this default value for each backend.
ban_cutoff¶
- Units: bans
- Default: 0
- Minimum: 0
- Flags: experimental
Expurge long tail content from the cache to keep the number of bans below this value. 0 disables.
When this parameter is set to a non-zero value, the ban lurker continues to work the ban list as usual top to bottom, but when it reaches the ban_cutoff-th ban, it treats all objects as if they matched a ban and expurges them from cache. As actively used objects get tested against the ban list at request time and thus are likely to be associated with bans near the top of the ban list, with ban_cutoff, least recently accessed objects (the “long tail”) are removed.
This parameter is a safety net to avoid bad response times due to bans being tested at lookup time. Setting a cutoff trades response time for cache efficiency. The recommended value is proportional to rate(bans_lurker_tests_tested) / n_objects while the ban lurker is working, which is the number of bans the system can sustain. The additional latency due to request ban testing is in the order of ban_cutoff / rate(bans_lurker_tests_tested). For example, for rate(bans_lurker_tests_tested) = 2M/s and a tolerable latency of 100ms, a good value for ban_cutoff may be 200K.
ban_dups¶
- Units: bool
- Default: on
Eliminate older identical bans when a new ban is added. This saves CPU cycles by not comparing objects to identical bans. This is a waste of time if you have many bans which are never identical.
ban_lurker_age¶
- Units: seconds
- Default: 60.000
- Minimum: 0.000
The ban lurker will ignore bans until they are this old. When a ban is added, the active traffic will be tested against it as part of object lookup. Because many applications issue bans in bursts, this parameter holds the ban-lurker off until the rush is over. This should be set to the approximate time which a ban-burst takes.
ban_lurker_batch¶
- Default: 1000
- Minimum: 1
The ban lurker sleeps ${ban_lurker_sleep} after examining this many objects. Use this to pace the ban-lurker if it eats too many resources.
ban_lurker_holdoff¶
- Units: seconds
- Default: 0.010
- Minimum: 0.000
- Flags: experimental
How long the ban lurker sleeps when giving way to lookup due to lock contention.
ban_lurker_sleep¶
- Units: seconds
- Default: 0.010
- Minimum: 0.000
How long the ban lurker sleeps after examining ${ban_lurker_batch} objects. Use this to pace the ban-lurker if it eats too many resources. A value of zero will disable the ban lurker entirely.
between_bytes_timeout¶
- Units: seconds
- Default: 60.000
- Minimum: 0.000
We only wait for this many seconds between bytes received from the backend before giving up the fetch. A value of zero means never give up. VCL values, per backend or per backend request take precedence. This parameter does not apply to pipe’ed requests.
cc_command¶
- Default: exec gcc -g -O2 -Wall -Werror -Wno-error=unused-result -Werror -Wall -Wno-format-y2k -W -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wcast-qual -Wwrite-strings -Wswitch -Wshadow -Wunused-parameter -Wcast-align -Wchar-subscripts -Wnested-externs -Wextra -Wno-sign-compare -fstack-protector -Wno-missing-field-initializers -pthread -fpic -shared -Wl,-x -o %o %s
- Flags: must_reload
Command used for compiling the C source code to a dlopen(3) loadable object. Any occurrence of %s in the string will be replaced with the source file name, and %o will be replaced with the output file name.
cli_limit¶
- Units: bytes
- Default: 128k
- Minimum: 128b
- Maximum: 99999999b
Maximum size of CLI response. If the response exceeds this limit, the response code will be 201 instead of 200 and the last line will indicate the truncation.
cli_timeout¶
- Units: seconds
- Default: 60.000
- Minimum: 0.000
Timeout for the child's replies to CLI requests from the mgt_param.
clock_skew¶
- Units: seconds
- Default: 10
- Minimum: 0
How much clockskew we are willing to accept between the backend and our own clock.
clock_step¶
- Units: seconds
- Default: 1.000
- Minimum: 0.000
How much observed clock step we are willing to accept before we panic.
connect_timeout¶
- Units: seconds
- Default: 3.500
- Minimum: 0.000
Default connection timeout for backend connections. We only try to connect to the backend for this many seconds before giving up. VCL can override this default value for each backend and backend request.
critbit_cooloff¶
- Units: seconds
- Default: 180.000
- Minimum: 60.000
- Maximum: 254.000
- Flags: wizard
How long the critbit hasher keeps deleted objheads on the cooloff list.
crypto_buffer¶
- Units: bytes
- Default: 32k
- Minimum: 2k
- Flags: experimental
Size of crypto buffer used for Total Encryption processing. These buffers are used for passing data to and from the kernel. If the buffers are too large, the kernel may block.
debug¶
- Default: none
Enable/Disable various kinds of debugging.
- none
- Disable all debugging
Use +/- prefix to set/reset individual bits:
- req_state
- VSL Request state engine
- workspace
- VSL Workspace operations
- waiter
- VSL Waiter internals
- waitinglist
- VSL Waitinglist events
- syncvsl
- Make VSL synchronous
- hashedge
- Edge cases in Hash
- vclrel
- Rapid VCL release
- lurker
- VSL Ban lurker
- esi_chop
- Chop ESI fetch to bits
- flush_head
- Flush after http1 head
- vtc_mode
- Varnishtest Mode
- witness
- Emit WITNESS lock records
- vsm_keep
- Keep the VSM file on restart
- slow_acceptor
- Slow down Acceptor
- h2_nocheck
- Disable various H2 checks
- vmod_so_keep
- Keep copied VMOD libraries
- processors
- Fetch/Deliver processors
- protocol
- Protocol debugging
- probe
- VSL health probe events
- cli
- CLI debug log to syslog
- slow_start
- Add 3 seconds to CLI start
- failresched
- Fail from waiting list
- vcl_keep
- Keep VCL C and so files
- delay_deliver
- Wait 3 seconds before deliver
- delay_objiter
- Wait 3 seconds before objiter
- delay_poke
- Wait 3 seconds before poke
- single_segment
- Disable segment prefetch
- delay_vcltemp
- Wait 1 second before templock
- libadns_discard
- Wait 3 seconds during discard
- libadns_warm
- Wait 3 seconds during warm
- cli_show_sensitive
- Log sensitive commands to syslog and VSL
- startup_panic
- Panic early during cache startup
- vcc_lenient_restrict
- Turn $Restrict violations into warnings
- delay_fetch
- Wait 3 seconds in fetch task before start
default_grace¶
- Units: seconds
- Default: 10.000
- Minimum: 0.000
- Flags: obj_sticky
Default grace period. We will deliver an object this long after it has expired, provided another thread is attempting to get a new copy.
default_keep¶
- Units: seconds
- Default: 0.000
- Minimum: 0.000
- Flags: obj_sticky
Default keep period. We will keep a useless object around this long, making it available for conditional backend fetches. That means that the object will be removed from the cache at the end of ttl+grace+keep.
default_ttl¶
- Units: seconds
- Default: 120.000
- Minimum: 0.000
- Flags: obj_sticky
The TTL assigned to objects if neither the backend nor the VCL code assigns one.
epitaphs¶
- Default: 3
- Minimum: 0
- Maximum: 1000
- Flags: must_restart, experimental, wizard
Maximum number of messages the child can add to its gravestone. This allows the child to pass information to its successor through the manager. A value of 0 (zero) blocks this channel of communication.
esi_iovs¶
- Units: struct iovec
- Default: 10
- Minimum: 3
- Maximum: 1024
- Flags: wizard
Number of io vectors to allocate on the thread workspace for ESI requests.
esi_limit¶
- Units: parallel transactions
- Default: 10
- Minimum: 1
- Maximum: 255
Limit for the number of ESI fragments processed in parallel at each ESI level for each client request.
experimental¶
- Default: none
Enable/Disable experimental features.
- none
- Disable all experimental features
Use +/- prefix to set/reset individual bits:
- drop_pools
- Drop thread pools
- vcl_connect
- Allow return(connect) in VCL
- pipe_splice
- Use splice(2) for pipe mode
feature¶
- Default: +validate_client_responses,+validate_backend_requests,+vcl_req_reset,+vcl_ban
Enable/Disable various minor features.
- none
- Disable all features.
- default
- Set default value.
Use +/- prefix to enable/disable individual feature:
- short_panic
- Short panic message.
- no_coredump
- No coredumps.
- esi_ignore_https
- Treat HTTPS as HTTP in ESI:includes
- esi_disable_xml_check
- Don’t check if body looks like XML
- esi_ignore_other_elements
- Ignore non-esi XML-elements
- esi_remove_bom
- Remove UTF-8 BOM
- https_scheme
- Also split https URIs
- esi_include_onerror
- Parse the onerror attribute of <esi:include> tags.
- http2
- Support HTTP/2 protocol
- http_date_postel
- Relax parsing of timestamps in HTTP headers
- busy_stats_rate
- Make busy workers comply with thread_stats_rate
- validate_client_responses
- Check client HTTP responses for invalid characters
- validate_backend_requests
- Check backend HTTP requests for invalid characters
- vcl_req_reset
- Stop processing client VCL once the client is gone.
- vcl_ban
- Enable bans in VCL.
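Feature bits can be toggled either on the command line or at runtime through the CLI; the following combination is purely illustrative:
varnishd -p feature=+http2,-vcl_req_reset -a :80 -f /etc/varnish/default.vcl
varnishadm param.set feature +http_date_postel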
fetch_chunksize¶
- Units: bytes
- Default: 16k
- Minimum: 4k
- Flags: experimental
The default chunksize used by fetcher. This should be bigger than the majority of objects with short TTLs. Internal limits in the storage_file module make increases above 128kb a dubious idea.
fetch_maxchunksize¶
- Units: bytes
- Default: 0.25G
- Minimum: 64k
- Flags: experimental
The maximum chunksize we attempt to allocate from storage. Making this too large may cause delays and storage fragmentation.
first_byte_timeout¶
- Units: seconds
- Default: 60.000
- Minimum: 0.000
Default timeout for receiving first byte from backend. We only wait for this many seconds for the first byte before giving up. A value of 0 means it will never time out. VCL can override this default value for each backend and backend request. This parameter does not apply to pipe.
gzip_buffer¶
- Units: bytes
- Default: 32k
- Minimum: 2k
- Flags: experimental
Size of malloc buffer used for gzip processing. These buffers are used for in-transit data, for instance gunzip’ed data being sent to a client. Making this space too small results in more overhead, writes to sockets etc.; making it too big is probably just a waste of memory.
gzip_memlevel¶
- Default: 8
- Minimum: 1
- Maximum: 9
Gzip memory level 1=slow/least, 9=fast/most compression. Memory impact is 1=1k, 2=2k, … 9=256k.
h2_header_table_size¶
- Units: bytes
- Default: 4k
- Minimum: 0b
HTTP2 header table size. This is the size that will be used for the HPACK dynamic decoding table.
The value of this parameter defines SETTINGS_HEADER_TABLE_SIZE in the initial SETTINGS frame sent to the client when a new HTTP2 session is established.
h2_initial_window_size¶
- Units: bytes
- Default: 65535b
- Minimum: 65535b
- Maximum: 2147483647b
HTTP2 initial flow control window size.
The value of this parameter defines SETTINGS_INITIAL_WINDOW_SIZE in the initial SETTINGS frame sent to the client when a new HTTP2 session is established.
h2_max_concurrent_streams¶
- Units: streams
- Default: 100
- Minimum: 0
HTTP2 Maximum number of concurrent streams. This is the number of requests that can be active at the same time for a single HTTP2 connection.
The value of this parameter defines SETTINGS_MAX_CONCURRENT_STREAMS in the initial SETTINGS frame sent to the client when a new HTTP2 session is established.
h2_max_frame_size¶
- Units: bytes
- Default: 16k
- Minimum: 16k
- Maximum: 16777215b
HTTP2 maximum per frame payload size we are willing to accept.
The value of this parameter defines SETTINGS_MAX_FRAME_SIZE in the initial SETTINGS frame sent to the client when a new HTTP2 session is established.
h2_max_header_list_size¶
- Units: bytes
- Default: 0b
- Minimum: 0b
- Maximum: 2147483647b
HTTP2 maximum size of an uncompressed header list. This parameter is not mapped to SETTINGS_MAX_HEADER_LIST_SIZE in the initial SETTINGS frame; the http_req_size parameter is instead. The http_req_size advises HTTP2 clients of the maximum size for the header list. Exceeding http_req_size results in a reset stream after processing the HPACK block to preserve the connection, but exceeding h2_max_header_list_size results in the HTTP2 connection going away immediately.
If h2_max_header_list_size is lower than http_req_size, it has no effect, except for the special value zero interpreted as 150% of http_req_size.
h2_rapid_reset¶
- Units: seconds
- Default: 1.000
- Minimum: 0.000
- Flags: delayed, experimental
The upper threshold for how soon an http/2 RST_STREAM frame has to be parsed after a HEADERS frame for it to be treated as suspect and subjected to the rate limits specified by h2_rapid_reset_limit and h2_rapid_reset_period. Changes to this parameter affect the default for new HTTP2 sessions.
h2_rapid_reset_limit¶
- Default: 100
- Minimum: 0
- Flags: delayed, experimental
HTTP2 RST Allowance.
Specifies the maximum number of allowed stream resets issued by a client over a time period before the connection is closed. Setting this parameter to 0 disables the limit. Changes to this parameter affect the default for new HTTP2 sessions.
h2_rapid_reset_period¶
- Units: seconds
- Default: 60.000
- Minimum: 1.000
- Flags: delayed, experimental, wizard
HTTP2 sliding window duration for h2_rapid_reset_limit. Changes to this parameter affect the default for new HTTP2 sessions.
h2_rx_window_increment¶
- Units: bytes
- Default: 1M
- Minimum: 1M
- Maximum: 1G
- Flags: wizard
HTTP2 Receive Window Increments. How big credits we send in WINDOW_UPDATE frames. Only affects incoming request bodies (ie: POST, PUT etc.)
h2_rx_window_low_water¶
- Units: bytes
- Default: 10M
- Minimum: 65535b
- Maximum: 1G
- Flags: wizard
HTTP2 Receive Window low water mark. We try to keep the window at least this big. Only affects incoming request bodies (ie: POST, PUT etc.)
h2_rxbuf_storage¶
- Default: Transient
- Flags: must_restart
The name of the storage backend that HTTP/2 receive buffers should be allocated from.
h2_window_timeout¶
- Units: seconds
- Default: 5.000
- Minimum: 0.000
- Flags: wizard
HTTP2 time limit without window credits. How long a stream may wait for the client to credit the window and allow for more DATA frames to be sent.
http_brotli_support¶
- Units: bool
- Default: on
- Enable brotli support. When enabled Varnish requests compressed objects from the backend and stores them compressed. If a client does not support brotli encoding Varnish will uncompress compressed objects on demand. Varnish will also rewrite the Accept-Encoding header of clients indicating support for brotli to:
- Accept-Encoding: br
Clients that do not support brotli will have it removed from the Accept-Encoding header. When brotli support is disabled, brotli.compress() and brotli.decompress() have no effect in VCL.
http_gzip_support¶
- Units: bool
- Default: on
- Enable gzip support. When enabled Varnish requests compressed objects from the backend and stores them compressed. If a client does not support gzip encoding Varnish will uncompress compressed objects on demand. Varnish will also rewrite the Accept-Encoding header of clients indicating support for gzip to:
- Accept-Encoding: gzip
Clients that do not support gzip will have their Accept-Encoding header removed. For more information on how gzip is implemented please see the chapter on gzip in the Varnish reference.
When gzip support is disabled the variables beresp.do_gzip and beresp.do_gunzip have no effect in VCL.
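As a minimal VCL sketch of how these variables are typically used (the content-type test is purely illustrative):
sub vcl_backend_response {
    if (beresp.http.Content-Type ~ "^text/") {
        set beresp.do_gzip = true;
    }
}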
http_max_hdr¶
- Units: header lines
- Default: 64
- Minimum: 32
- Maximum: 65535
Maximum number of HTTP header lines we allow in {req|resp|bereq|beresp}.http (obj.http is autosized to the exact number of headers). Cheap, ~20 bytes, in terms of workspace memory. Note that the first line occupies five header lines.
http_req_hdr_len¶
- Units: bytes
- Default: 8k
- Minimum: 40b
Maximum length of any HTTP client request header we will allow. The limit is inclusive of its continuation lines.
http_req_size¶
- Units: bytes
- Default: 32k
- Minimum: 0.25k
Maximum number of bytes of HTTP client request we will deal with. This is a limit on all bytes up to the double blank line which ends the HTTP request. The memory for the request is allocated from the client workspace (param: workspace_client) and this parameter limits how much of that the request is allowed to take up.
For HTTP2 clients, it is advertised as MAX_HEADER_LIST_SIZE in the initial SETTINGS frame.
http_resp_hdr_len¶
- Units: bytes
- Default: 8k
- Minimum: 40b
Maximum length of any HTTP backend response header we will allow. The limit is inclusive of its continuation lines.
http_resp_size¶
- Units: bytes
- Default: 32k
- Minimum: 0.25k
Maximum number of bytes of HTTP backend response we will deal with. This is a limit on all bytes up to the double blank line which ends the HTTP response. The memory for the response is allocated from the backend workspace (param: workspace_backend) and this parameter limits how much of that the response is allowed to take up.
idle_send_timeout¶
- Units: seconds
- Default: 60.000
- Minimum: 0.000
- Flags: delayed
Send timeout for individual pieces of data on client connections. May get extended if ‘send_timeout’ applies.
When this timeout is hit, the session is closed.
See the man page for setsockopt(2) under SO_SNDTIMEO for more information.
last_byte_timeout¶
- Units: seconds
- Default: 0.000
- Minimum: 0.000
Maximum amount of time to wait for a complete backend response. A value of zero means wait forever.
lru_interval¶
- Units: seconds
- Default: 2.000
- Minimum: 0.000
- Flags: experimental
Grace period before object moves on LRU list. Objects are only moved to the front of the LRU list if they have not been moved there already inside this timeout period. This reduces the amount of lock operations necessary for LRU list access.
max_restarts¶
- Units: restarts
- Default: 4
- Minimum: 0
Upper limit on how many times a request can restart.
max_retries¶
- Units: retries
- Default: 4
- Minimum: 0
Upper limit on how many times a backend fetch can retry.
max_vcl¶
- Default: 100
- Minimum: 0
Threshold of loaded VCL programs. (VCL labels are not counted.) Parameter max_vcl_handling determines behaviour.
max_vcl_handling¶
- Default: 1
- Minimum: 0
- Maximum: 2
Behaviour when attempting to exceed max_vcl loaded VCL.
- 0 - Ignore max_vcl parameter.
- 1 - Issue warning.
- 2 - Refuse loading VCLs.
memory_arenas¶
- Units: arenas
- Default: 0
- Minimum: 0
- Flags: must_restart, experimental
Number of jemalloc arenas to use for object payload storage. When zero, object payload allocations are distributed among the default arenas together with all other allocations. When non-zero it specifies the number of arenas to dedicate to object payload allocations.
memory_stat_interval¶
- Units: seconds
- Default: 0.100
- Minimum: 0.001
Interval between updates of the memory usage statistics when the memory governor is active (see varnish-mse(7)). Shorter interval may allow the system to react faster to changes in memory usage.
memory_target¶
- Units: bytes|percentage
- Default: “80.00%”
- Minimum: 1M
Target RssAnon memory usage of the cache worker process when the memory governor is active (see varnish-mse(7)). May be specified as either a percentage of total system memory on the server, or as a byte value. Negative byte values are read as the memory size to not use. A zero byte target is permitted, but if a positive byte value is specified it needs to be greater than the minimum value. May be changed at runtime.
nuke_limit¶
- Units: allocations
- Default: 50
- Minimum: 0
- Flags: experimental
Maximum number of objects we attempt to nuke in order to make space for an object body.
numa_aware¶
NB: This parameter depends on a feature which is not available on all platforms.
- Units: bool
- Default: off
- Flags:
Become NUMA aware to utilize systems with more than one CPU more efficiently. -p reuseport=on is a prerequisite to enable this. The kernel also needs to support both SO_ATTACH_REUSEPORT_EBPF and the helper function BPF_FUNC_get_numa_node_id.
NB: MUST BE SET ON THE COMMAND LINE.
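Because both parameters must be set on the command line, a sketch of enabling them looks like:
varnishd -p reuseport=on -p numa_aware=on -a :80 -f /etc/varnish/default.vcl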
object_mutex_slots¶
- Units: slots
- Default: 4096
- Minimum: 1
- Maximum: 65535
- Flags: must_restart
Number of mutex and condvar slots for per object signalling. Objects are assigned randomly to one of these slots. Increasing this number may reduce mutex contention and spurious thread wake ups.
pcre_match_limit¶
- Default: 10000
- Minimum: 1
The limit for the number of calls to the internal match() function in pcre_exec().
(See: PCRE_EXTRA_MATCH_LIMIT in pcre docs.)
This parameter limits how much CPU time regular expression matching can soak up.
pcre_match_limit_recursion¶
- Default: 20
- Minimum: 1
The recursion depth-limit for the internal match() function in a pcre_exec().
(See: PCRE_EXTRA_MATCH_LIMIT_RECURSION in pcre docs.)
This puts an upper limit on the amount of stack used by PCRE for certain classes of regular expressions.
We have set the default value low in order to prevent crashes, at the cost of possible regexp matching failures.
Matching failures will show up in the log as VCL_Error messages with regexp errors -27 or -21.
Testcase r01576 can be useful when tuning this parameter.
ping_interval¶
- Units: seconds
- Default: 3
- Minimum: 0
- Flags: must_restart
Interval between pings from parent to child. Zero will disable pinging entirely, which makes it possible to attach a debugger to the child.
pipe_timeout¶
- Units: seconds
- Default: 60.000
- Minimum: 0.000
Idle timeout for PIPE sessions. If nothing has been received in either direction for this many seconds, the session is closed.
pool_req¶
- Default: 10,100,10
Parameters for per worker pool request memory pool. The three numbers are:
- min_pool
- minimum size of free pool.
- max_pool
- maximum size of free pool.
- max_age
- max age of free element.
pool_sess¶
- Default: 10,100,10
Parameters for per worker pool session memory pool. The three numbers are:
- min_pool
- minimum size of free pool.
- max_pool
- maximum size of free pool.
- max_age
- max age of free element.
pool_sslbuffer¶
- Default: 10,100,10
Parameters for ssl buffer pool. The three numbers are:
- min_pool
- minimum size of free pool.
- max_pool
- maximum size of free pool.
- max_age
- max age of free element.
pool_vbo¶
- Default: 10,100,10
Parameters for backend object fetch memory pool. The three numbers are:
- min_pool
- minimum size of free pool.
- max_pool
- maximum size of free pool.
- max_age
- max age of free element.
prefer_ipv6¶
- Units: bool
- Default: off
Prefer IPv6 address when connecting to backends which have both IPv4 and IPv6 addresses.
reuseport¶
NB: This parameter depends on a feature which is not available on all platforms.
- Units: bool
- Default: off
- Flags:
Listen to clients on an already bound address. Also, create a listen group to load balance incoming clients between the pools. This requires SO_REUSEPORT support from the kernel.
NB: MUST BE SET ON THE COMMAND LINE.
rush_exponent¶
- Units: requests per request
- Default: 3
- Minimum: 2
- Flags: experimental
How many parked requests we start for each completed request on the object. NB: Even with the implicit delay of delivery, this parameter controls an exponential increase in the number of worker threads.
scoreboard_active¶
- Default: on
- Flags: must_restart, experimental
Deprecated alias of vst_space. Turning this on will set the default value of vst_space.
send_timeout¶
- Units: seconds
- Default: 600.000
- Minimum: 0.000
- Flags: delayed
Total timeout for ordinary HTTP1 responses. Does not apply to some internally generated errors and pipe mode.
When ‘idle_send_timeout’ is hit while sending an HTTP1 response, the timeout is extended unless the total time already taken for sending the response in its entirety exceeds this many seconds.
When this timeout is hit, the session is closed.
shm_reclen¶
- Units: bytes
- Default: 4084b
- Minimum: 16b
- Maximum: 4084b
Old name for vsl_reclen, use that instead.
shortlived¶
- Units: seconds
- Default: 10.000
- Minimum: 0.000
Objects created with (ttl+grace+keep) shorter than this are always put in transient storage.
shutdown_close¶
- Units: bool
- Default: off
Control if listen sockets should be closed during ‘shutdown_delay’ upon reception of SIGTERM.
shutdown_delay¶
- Units: seconds
- Default: 0.000
- Minimum: 0.000
Delay before shutting down the management process upon reception of SIGTERM. Varnish will wait for ‘shutdown_delay’ seconds before terminating. When ‘shutdown_close’ is enabled, it will also stop accepting new connections (see socket.close command) during that time. When zero is specified, the shutdown is immediate.
sigsegv_handler¶
- Units: bool
- Default: on
- Flags: must_restart
Install a signal handler which tries to dump debug information on segmentation faults, bus errors and abort signals.
slicer_excess_ratio¶
- Default: 0.5
- Minimum: 0
- Maximum: 1
How much larger than the configured segment size we allow the last segment to be. This parameter is specified as a ratio of the configured segment size. The default value of 0.5 will allow the final segment to be up to 1.5 times the configured segment size. NOTE: Changing this parameter on a running Varnish will cause a cache miss at the tail end of a slicer delivery.
startup_timeout¶
- Units: seconds
- Default: 600.000
- Minimum: 0.000
- Flags: experimental
Timeout for CLI commands during the child’s startup, including automatic restarts from the manager. This parameter only takes effect if it is larger than cli_timeout.
tcp_keepalive_intvl¶
- Units: seconds
- Default: 75.000
- Minimum: 1.000
- Maximum: 100.000
- Flags: experimental
The number of seconds between TCP keep-alive probes. Ignored for Unix domain sockets.
tcp_keepalive_probes¶
- Units: probes
- Default: 9
- Minimum: 1
- Maximum: 100
- Flags: experimental
The maximum number of TCP keep-alive probes to send before giving up and killing the connection if no response is obtained from the other end. Ignored for Unix domain sockets.
tcp_keepalive_time¶
- Units: seconds
- Default: 300.000
- Minimum: 1.000
- Maximum: 7200.000
- Flags: experimental
The number of seconds a connection needs to be idle before TCP begins sending out keep-alive probes. Ignored for Unix domain sockets.
thread_pool_add_delay¶
- Units: seconds
- Default: 0.000
- Minimum: 0.000
- Flags: experimental
Wait at least this long after creating a thread.
Some (buggy) systems may need a short (sub-second) delay between creating threads. Set this to a few milliseconds if you see the ‘threads_failed’ counter grow too much.
Setting this too high results in insufficient worker threads.
thread_pool_destroy_delay¶
- Units: seconds
- Default: 1.000
- Minimum: 0.010
- Flags: delayed, experimental
Wait this long after destroying a thread.
This controls the decay of thread pools when idle(-ish).
thread_pool_fail_delay¶
- Units: seconds
- Default: 0.200
- Minimum: 0.010
- Flags: experimental
Wait at least this long after a failed thread creation before trying to create another thread.
Failure to create a worker thread is often a sign that the end is near, because the process is running out of some resource. This delay tries to not rush the end on needlessly.
If thread creation failures are a problem, check that thread_pool_max is not too high.
It may also help to increase thread_pool_timeout and thread_pool_min, to reduce the rate at which threads are destroyed and later recreated.
thread_pool_max¶
- Units: threads
- Default: 5000
- Minimum: 100
- Flags: delayed
The maximum number of worker threads in each pool. The minimum value depends on thread_pool_min.
Do not set this higher than you have to, since excess worker threads soak up RAM and CPU and generally just get in the way of getting work done.
thread_pool_min¶
- Units: threads
- Default: 100
- Minimum: 5
- Maximum: 5000
- Flags: delayed
The minimum number of worker threads in each pool. The maximum value depends on thread_pool_max.
Increasing this may help ramp up faster from low load situations or when threads have expired.
Technical minimum is 5 threads, but this parameter is strongly recommended to be at least 10.
thread_pool_reserve¶
- Units: threads
- Default: 0
- Maximum: 95
- Flags: delayed
The number of worker threads reserved for vital tasks in each pool.
Tasks may require other tasks to complete (for example, client requests may require backend requests, http2 sessions require streams, which require requests). This reserve is to ensure that lower priority tasks do not prevent higher priority tasks from running even under high load.
The effective value is at least 5 (the number of internal priority classes), irrespective of this parameter. Default is 0 to auto-tune (5% of thread_pool_min). Minimum is 1 otherwise, maximum is 95% of thread_pool_min.
thread_pool_stack¶
- Units: bytes
- Default: 48k
- Minimum: 16k
- Flags: delayed
Worker thread stack size. This will likely be rounded up to a multiple of 4k (or whatever the page_size might be) by the kernel.
The required stack size is primarily driven by the depth of the call-tree. The most common relevant determining factors in varnish core code are GZIP (un)compression, ESI processing and regular expression matches. VMODs may also require significant amounts of additional stack. The nesting depth of VCL subs is another factor, although typically not predominant.
The stack size is per thread, so the maximum total memory required for worker thread stacks is in the order of size = thread_pools x thread_pool_max x thread_pool_stack.
Thus, in particular for setups with many threads, keeping the stack size at a minimum helps reduce the amount of memory required by Varnish.
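As an illustration using the defaults listed in this section (thread_pools = 2, thread_pool_max = 5000, thread_pool_stack = 48k), the worst-case stack memory works out to:
2 pools x 5000 threads x 48k = 480000k, i.e. roughly 470 MiB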
On the other hand, thread_pool_stack must be large enough under all circumstances, otherwise varnish will crash due to a stack overflow. Usually, a stack overflow manifests itself as a segmentation fault (aka segfault / SIGSEGV) with the faulting address being near the stack pointer (sp).
Unless stack usage can be reduced, thread_pool_stack must be increased when a stack overflow occurs. Setting it in 150%-200% increments is recommended until stack overflows cease to occur.
thread_pool_timeout¶
- Units: seconds
- Default: 300.000
- Minimum: 10.000
- Flags: delayed, experimental
Thread idle threshold.
Threads in excess of thread_pool_min, which have been idle for at least this long, will be destroyed.
thread_pool_track¶
- Default: off
- Flags: delayed, experimental
Keep track of running worker threads managed by thread pools, and tasks queued in the pools. Once enabled, this will only track new tasks and will only take effect if there is some vst_space to store task tracking.
Likewise, once disabled ongoing tasks may still appear to be tracked until they reach their next step.
thread_pool_watchdog¶
- Units: seconds
- Default: 60.000
- Minimum: 0.100
- Flags: experimental
Thread queue stuck watchdog.
If no queued work has been released for this long, the worker process panics itself.
thread_pools¶
- Units: pools
- Default: 2
- Minimum: 1
- Maximum: 32
- Flags: delayed, experimental
Number of worker thread pools.
Increasing the number of worker pools decreases lock contention. Each worker pool also has a thread accepting new connections, so for very high rates of incoming new connections on systems with many cores, increasing the worker pools may be required.
Too many pools waste CPU and RAM resources, and more than one pool for each CPU is most likely detrimental to performance.
Can be increased on the fly, but decreases require a restart to take effect.
thread_queue_limit¶
- Default: 20
- Minimum: 0
- Flags: experimental
Permitted request queue length per thread-pool.
This sets the number of requests we will queue, waiting for an available thread. Above this limit sessions will be dropped instead of queued.
thread_stats_rate¶
- Units: requests
- Default: 10
- Minimum: 0
- Flags: experimental
Worker threads accumulate statistics, and dump these into the global stats counters if the lock is free when they finish a job (request/fetch etc.) This parameter defines the maximum number of jobs a worker thread may handle, before it is forced to dump its accumulated stats into the global counters.
timeout_idle¶
- Units: seconds
- Default: 5.000
- Minimum: 0.000
Idle timeout for client connections.
An HTTP/1 connection is considered idle until the first header byte is received, at which point timeout_req is in effect, and timeout_idle limits the time between header bytes. For HTTP/1 keep-alive, optional CRLF sequences between requests aren’t considered header bytes.
For h2 connections, this is the maximum time to get a complete frame, unless it is a HEADERS frame, at which point timeout_req is in effect and timeout_idle limits the time between frame bytes. This parameter is only enforced for incomplete frames when there are no ongoing streams on the connection.
When PROXY protocol is expected on a listen socket, timeout_idle defines the limit until a complete PROXYv1 or PROXYv2 header is received.
When timeout_idle is reached the client connection is closed.
timeout_linger¶
- Units: seconds
- Default: 0.050
- Minimum: 0.000
- Flags: experimental
How long the worker thread lingers on an idle session before handing it over to the waiter. When sessions are reused, as much as half of all reuses happen within the first 100 msec of the previous request completing. Setting this too high results in worker threads not doing anything for their keep, setting it too low just means that more sessions take a detour around the waiter.
timeout_req¶
- Units: seconds
- Default: 5.000
- Minimum: 0.000
- Flags: experimental
Maximum time to receive a client request’s headers.
For HTTP/1 connections, this is measured from the moment the first header byte was received.
For h2 connections, this is the maximum time until a CONTINUATION frame with the END_HEADERS flag is received once a HEADERS frame without the flag is received. This is signalled by a GOAWAY frame with the COMPRESSION_ERROR error.
When timeout_req is reached the client connection is closed.
timeout_reqbody¶
- Units: seconds
- Default: 0.000
- Minimum: 0.000
- Flags: experimental
Maximum time to receive a client request body, measured from the moment the first header byte was received.
When timeout_reqbody is reached the client connection is closed.
tls_handshake_timeout¶
- Units: seconds
- Default: 8.000
- Minimum: 0.000
Default timeout for completion of the TLS handshake. We only wait for this many seconds for the handshake to complete before giving up.
transit_buffer¶
- Units: bytes
- Default: 0b
- Minimum: 0b
The default prefetch amount used during a single private transaction. Enabling this will prevent running out of memory when there are big streaming transfers going on. Setting the value to zero will disable the feature; however, when enabled the minimum value is 4KB.
vcc_unsafe_path¶
- Units: bool
- Default: on
Allow ‘/’ in vmod & include paths. Allow ‘import … from …’.
vcl_cooldown¶
- Units: seconds
- Default: 600.000
- Minimum: 0.000
How long a VCL is kept warm after being replaced as the active VCL (granularity approximately 30 seconds).
vcl_dir¶
- Default: /usr/local/etc/varnish:/usr/local/share/varnish-plus/vcl
Old name for vcl_path, use that instead.
vcl_path¶
- Default: /usr/local/etc/varnish:/usr/local/share/varnish-plus/vcl
Directory (or colon separated list of directories) from which relative VCL filenames (vcl.load and include) are to be found. By default Varnish searches VCL files in both the system configuration and shared data directories to allow packages to drop their VCL files in a standard location where relative includes would work.
vmod_http_max_conn¶
- Units: connections
- Default: 25
- Minimum: 0
- Maximum: 100000
- Flags: delayed
The maximum number of connections kept open for reuse in each execution thread.
vmod_http_max_tasks_total¶
- Units: tasks
- Default: 1000
- Minimum: 1
The maximum number of active tasks to allow before new tasks will be rejected.
vmod_http_min_tasks_thread¶
- Units: tasks
- Default: 100
- Minimum: 1
- Maximum: 1000
The minimum number of tasks per execution thread to aim for before splitting the load on additional threads.
vmod_http_pool_timeout¶
- Units: seconds
- Default: 118.000
- Minimum: 1.000
- Flags: delayed
Timeout before we close unused idle connections. If this timeout is increased, prior connections may be kept longer in the pool, including connections in the CLOSE-WAIT state.
vmod_http_threads¶
- Units: threads
- Default: 10
- Minimum: 1
- Maximum: 1000
- Flags: must_reload
The number of vmod_http threads used to service abandoned requests.
vmod_path¶
- Default: /usr/local/lib/varnish-plus/vmods
Directory (or colon separated list of directories) where VMODs are to be found.
vsl_buffer¶
- Units: bytes
- Default: 4k
- Minimum: 4096
Bytes of (req-/backend-)workspace dedicated to buffering VSL records. When this parameter is adjusted, most likely workspace_client and workspace_backend will have to be adjusted by the same amount.
Setting this too high costs memory, setting it too low will cause more VSL flushes and likely increase lock-contention on the VSL mutex.
The minimum tracks the vsl_reclen parameter + 12 bytes.
vsl_mask¶
- Default: -Debug,-ObjProtocol,-ObjStatus,-ObjReason,-ObjHeader,-VCL_trace,-ExpKill,-WorkThread,-Hash,-VfpAcct,-H2RxHdr,-H2RxBody,-H2TxHdr,-H2TxBody
Mask individual VSL messages from being logged.
- default
- Set default value
Use +/- prefix in front of VSL tag name, to mask/unmask individual VSL messages.
vsl_reclen¶
- Units: bytes
- Default: 4084b
- Minimum: 16b
- Maximum: 4084b
Maximum number of bytes in SHM log record.
The maximum tracks the vsl_buffer parameter - 12 bytes.
vsl_space¶
- Units: bytes
- Default: 80M
- Minimum: 1M
- Maximum: 4G
- Flags: must_restart
The amount of space to allocate for the VSL fifo buffer in the VSM memory segment. If you make this too small, varnish{ncsa|log} etc will not be able to keep up. Making it too large just costs memory resources.
vsm_free_cooldown¶
- Units: seconds
- Default: 60.000
- Minimum: 10.000
- Maximum: 600.000
How long VSM memory is kept warm after a deallocation (granularity approximately 2 seconds).
vsm_publish_interval¶
- Units: seconds
- Default: 1.000
- Minimum: 0.000
- Maximum: 60.000
The minimum interval that new VSM segment indexes are published. This parameter reduces the frequency that utilities will be notified of new VSM segment indexes.
vsm_space¶
- Units: bytes
- Default: 1M
- Minimum: 1M
- Maximum: 1G
DEPRECATED: This parameter is ignored. There is no global limit on amount of shared memory now.
vst_space¶
- Units: bytes
- Default: 10M
- Minimum: 1M
- Maximum: 100M
- Flags: must_restart, experimental
The amount of space to allocate for a VST memory segment. A small buffer may not be able to keep track of all running and queued tasks. Making it too large wastes memory resources. There is one VST segment per thread pool.
Using “none” instead of a number of bytes will disable the allocation of VST segments and prevent task tracking, even if thread_pool_track is on.
workspace_backend¶
- Units: bytes
- Default: 64k
- Minimum: 1k
- Flags: delayed
Bytes of HTTP protocol workspace for backend HTTP req/resp. If larger than 4k, use a multiple of 4k for VM efficiency.
workspace_client¶
- Units: bytes
- Default: 64k
- Minimum: 9k
- Flags: delayed
Bytes of HTTP protocol workspace for clients' HTTP req/resp. Use a multiple of 4k for VM efficiency. For HTTP/2 compliance this must be at least 20k, in order to receive fullsize (=16k) frames from the client. That usually happens only in POST/PUT bodies. For other traffic-patterns smaller values work just fine.
workspace_session¶
- Units: bytes
- Default: 0.75k
- Minimum: 0.25k
- Flags: delayed
Allocation size for session structure and workspace. The workspace is primarily used for TCP connection addresses. If larger than 4k, use a multiple of 4k for VM efficiency.
workspace_thread¶
- Units: bytes
- Default: 2k
- Minimum: 0.25k
- Maximum: 8k
- Flags: delayed
Bytes of auxiliary workspace per thread. This workspace is used for certain temporary data structures during the operation of a worker thread. One use is for the IO-vectors used during delivery. Setting this parameter too low may increase the number of writev() syscalls, setting it too high just wastes space. ~0.1k + UIO_MAXIOV * sizeof(struct iovec) (typically = ~16k for 64bit) is considered the maximum sensible value under any known circumstances (excluding exotic vmod use).
EXIT CODES¶
Varnish and bundled tools will, in most cases, exit with one of the following codes:
- 0 OK
- 1 Some error which could be system-dependent and/or transient
- 2 Serious configuration / parameter error - retrying with the same configuration / parameters is most likely useless
The varnishd master process may also OR its exit code
- with 0x20 when the varnishd child process died,
- with 0x40 when the varnishd child process was terminated by a signal and
- with 0x80 when a core was dumped.
SEE ALSO¶
HISTORY¶
The varnishd daemon was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS and Varnish Software.
This manual page was written by Dag-Erling Smørgrav with updates by Stig Sandbeck Mathisen <ssm@debian.org>, Nils Goroll and others.
COPYRIGHT¶
This document is licensed under the same licence as Varnish itself. See LICENCE for details.
- Copyright (c) 2007-2015 Varnish Software AS