HttpProxyModule

Latest revision as of 10:30, 10 April 2014

WARNING: this article is obsoleted. Please refer to http://nginx.org/en/docs/ for the latest official documentation.


Synopsis

This module makes it possible to transfer requests to another server.

Example:

location / {
  proxy_pass        http://localhost:8000;
  proxy_set_header  X-Real-IP  $remote_addr;
}

Note that when using the HTTP Proxy Module (or even when using FastCGI), the entire client request will be buffered in nginx before being passed on to the backend proxied servers. As a result, upload progress meters will not function correctly if they work by measuring the data received by the backend servers.

Directives

proxy_bind

syntax: proxy_bind address

default: none

context: http, server, location

version: ≥ 0.8.22

example:

proxy_bind  192.168.1.1;

This directive binds each upstream socket to a local address before calling connect(). It may be useful if the host has several interfaces or aliases and you want outgoing connections to originate from a specific interface or address.

proxy_buffer_size

Syntax: proxy_buffer_size size
Default: 4k|8k
Context: http, server, location
Reference: proxy_buffer_size

This directive sets the size of the buffer into which the first part of the response from the proxied server is read.

This part of the response usually contains just a small response header.

By default, the buffer size equals the size of one buffer in the proxy_buffers directive; however, it can be set smaller.

proxy_buffering

Syntax: proxy_buffering on | off
Default: on
Context: http, server, location
Reference: proxy_buffering

This directive enables buffering of responses from the proxied server.

If buffering is enabled, nginx reads the response from the proxied server as fast as possible, saving it into buffers configured by the proxy_buffer_size and proxy_buffers directives. If the response does not fit into memory, parts of it are written to disk.

If buffering is disabled, the response is passed to the client synchronously, as soon as it is received. nginx does not attempt to read the entire response from the proxied server; the maximum amount of data nginx can accept from the server at one time is set by the proxy_buffer_size directive.

Also note that caching of upstream responses will not work if proxy_buffering is set to off.

For Comet applications based on long-polling it is important to set proxy_buffering to off; otherwise the asynchronous response is buffered and Comet does not work.

Buffering can be set on a per-request basis by setting the X-Accel-Buffering header in the proxy response.
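As a sketch (the upstream name and path are hypothetical), a long-polling endpoint might disable buffering per location, while the backend can opt out per response with "X-Accel-Buffering: no":

```nginx
# Hypothetical long-polling endpoint: disable buffering so each
# chunk of the upstream response reaches the client immediately.
location /push/ {
    proxy_pass       http://backend;   # "backend" upstream is assumed
    proxy_buffering  off;
}
```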

proxy_buffers

Syntax: proxy_buffers number size
Default: 8 4k|8k
Context: http, server, location
Reference: proxy_buffers

This directive sets the number and size of the buffers into which the response from the proxied server is read. By default, the size of one buffer equals the memory page size; depending on the platform this is either 4K or 8K.

proxy_busy_buffers_size

Syntax: proxy_busy_buffers_size size
Default: 8k|16k
Context: http, server, location
Reference: proxy_busy_buffers_size


proxy_cache

Syntax: proxy_cache zone | off
Default: off
Context: http, server, location
Reference: proxy_cache

This directive sets the name of a zone used for caching. The same zone can be used in multiple places.

The cache honors the backend's "Expires", "Cache-Control: no-cache", and "Cache-Control: max-age=XXX" headers since version 0.7.48. Since version 0.7.66, "private" and "no-store" are also honored. nginx does not handle "Vary" headers when caching. To ensure private items are not unintentionally served to all users by the cache, the backend can set "no-cache" or "max-age=0", or the proxy_cache_key must include user-specific data such as $cookie_xxx. However, using cookie values as part of proxy_cache_key can defeat the benefits of caching public items, so separate locations with different proxy_cache_key values may be necessary to keep private and public items apart.
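One way to keep private and public items apart, as described above, is to use separate locations with different cache keys; a minimal sketch (zone and cookie names are hypothetical):

```nginx
# Per-user pages: the session cookie becomes part of the cache key.
location /account/ {
    proxy_pass       http://backend;
    proxy_cache      one;
    proxy_cache_key  "$scheme$proxy_host$request_uri$cookie_sessionid";
}

# Public pages: shared cache key, so all users share cached copies.
location / {
    proxy_pass       http://backend;
    proxy_cache      one;
    proxy_cache_key  "$scheme$proxy_host$request_uri";
}
```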

The cache depends on proxy buffers, and will not work if proxy_buffering is set to off.

The following response headers flag a response as uncacheable unless they are ignored:

  • Set-Cookie
  • Cache-Control containing "no-cache", "no-store", "private", or a "max-age" with a non-numeric or 0 value
  • Expires with a time in the past
  • X-Accel-Expires: 0

proxy_cache_bypass

Syntax: proxy_cache_bypass string ...
Default: none
Context: http, server, location
Reference: proxy_cache_bypass

This directive specifies the conditions under which the answer will not be taken from the cache. If at least one of the string variables is not empty and not equal to "0", the answer is not taken from the cache:

 proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
 proxy_cache_bypass $http_pragma $http_authorization;

Note that the response from the backend is still eligible for caching. Thus one way of refreshing an item in the cache is to send a request with a header you pick yourself, e.g. "My-Secret-Header: 1", and a proxy_cache_bypass line like:

proxy_cache_bypass $http_my_secret_header;

This directive can be used in conjunction with proxy_no_cache.
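Putting the refresh idiom above into context, a sketch (zone name hypothetical) where a secret header bypasses the cache while the fresh response still replaces the cached copy:

```nginx
location / {
    proxy_pass          http://backend;
    proxy_cache         one;
    # A request carrying "My-Secret-Header: 1" skips the cached copy;
    # the backend response is still stored, refreshing the cache.
    proxy_cache_bypass  $http_my_secret_header;
}
```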

proxy_cache_key

Syntax: proxy_cache_key string
Default: $scheme$proxy_host$request_uri
Context: http, server, location
Reference: proxy_cache_key


The directive specifies what information is included in the key for caching, for example

proxy_cache_key "$host$request_uri$cookie_user";

Note that by default, the hostname of the server is not included in the cache key. If you are using subdomains for different locations on your website, you need to include it, e.g. by changing the cache key to something like

proxy_cache_key "$scheme$host$request_uri";

proxy_cache_lock

Syntax: proxy_cache_lock on | off
Default: off
Context: http, server, location
Appeared in: 1.1.12
Reference: proxy_cache_lock

When enabled, only one request at a time is allowed to populate a new cache element, identified according to the proxy_cache_key directive, by passing a request to the proxied server. Other requests for the same cache element will either wait for the response to appear in the cache or for the cache lock to be released, for up to the time set by the proxy_cache_lock_timeout directive. A similar effect for updating an existing cache entry (this directive applies only to inserting new cache elements) can be achieved with the proxy_cache_use_stale updating directive.
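A sketch combining the lock with stale-while-updating behaviour (zone name hypothetical):

```nginx
location / {
    proxy_pass                http://backend;
    proxy_cache               one;
    # Only one request populates a missing cache element...
    proxy_cache_lock          on;
    proxy_cache_lock_timeout  5s;
    # ...and while an existing element is being refreshed,
    # other requests are served the stale copy.
    proxy_cache_use_stale     updating;
}
```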

proxy_cache_lock_timeout

Syntax: proxy_cache_lock_timeout time
Default: 5s
Context: http, server, location
Appeared in: 1.1.12
Reference: proxy_cache_lock_timeout


proxy_cache_methods

syntax: proxy_cache_methods [GET HEAD POST];

default: proxy_cache_methods GET HEAD;

context: http, server, location

GET and HEAD are syntactic sugar: they cannot be disabled and are always cached, even if you list only other methods (for example, setting just POST).

proxy_cache_min_uses

Syntax: proxy_cache_min_uses number
Default: 1
Context: http, server, location
Reference: proxy_cache_min_uses

Sets the number of requests after which a response will be cached.

proxy_cache_path

Syntax: proxy_cache_path path [levels=levels] keys_zone=name:size [inactive=time] [max_size=size] [loader_files=number] [loader_sleep=time] [loader_threshold=time]
Default: none
Context: http
Reference: proxy_cache_path


This directive sets the cache path and other cache parameters. Cached data is stored in files. An MD5 hash of the proxied URL is used as the key for the cache entry, and is also used as the filename in the cache path for the response contents and metadata. The levels parameter sets the number of subdirectory levels in cache. For example:

proxy_cache_path  /data/nginx/cache/one  levels=1:2   keys_zone=one:10m;

In this cache, file names will be like the following:

/data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c

Each level may be 1 or 2, in the formats X, X:X, or X:X:X, e.g. "2", "2:2", "1:1:2". There can be at most three levels.

All active keys and metadata are stored in shared memory. The zone name and size are defined via the keys_zone parameter.

Note that each defined zone must have a unique path. For example:

proxy_cache_path  /data/nginx/cache/one    levels=1      keys_zone=one:10m;
proxy_cache_path  /data/nginx/cache/two    levels=2:2    keys_zone=two:100m;
proxy_cache_path  /data/nginx/cache/three  levels=1:1:2  keys_zone=three:1000m;

If cached data is not requested within the time defined by the inactive parameter, it is removed from the cache. The inactive parameter defaults to 10 minutes (10m).

A special process called the "cache manager" controls the on-disk cache. It is responsible for removing inactive items and enforcing the size limit defined by the max_size parameter. When the total size of the cache exceeds max_size, the least recently used data is deleted to make room for new cache entries (an LRU replacement policy).

Zone size should be set proportional to the number of pages to cache. The size of the metadata for one page (file) depends on the OS; currently it is 64 bytes on FreeBSD/i386 and 128 bytes on FreeBSD/amd64.

The directories specified by proxy_cache_path and proxy_temp_path should be located on the same filesystem.
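Tying the pieces together, a minimal end-to-end caching setup (paths, zone name, and upstream are hypothetical):

```nginx
http {
    # Cache storage and the shared-memory zone holding keys/metadata.
    proxy_cache_path  /data/nginx/cache  levels=1:2  keys_zone=one:10m
                      inactive=10m  max_size=1g;
    proxy_temp_path   /data/nginx/temp;   # same filesystem as the cache

    server {
        listen 80;
        location / {
            proxy_pass         http://backend;
            proxy_cache        one;
            proxy_cache_valid  200 302  10m;
            proxy_cache_valid  404      1m;
        }
    }
}
```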

proxy_cache_use_stale

Syntax: proxy_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_502 | http_503 | http_504 | http_404 | off ...
Default: off
Context: http, server, location
Reference: proxy_cache_use_stale


This directive tells nginx when to serve a stale item from the proxy cache. The parameters are similar to those of proxy_next_upstream, with the addition of "updating".

To prevent cache stampedes (when multiple requests try to update the cache simultaneously), specify the "updating" parameter. This causes one request to update the cache; while the update is in progress, all other requests are served the stale version from the cache.

proxy_cache_valid

Syntax: proxy_cache_valid [code ...] time
Default: none
Context: http, server, location
Reference: proxy_cache_valid

This directive sets the caching time for different replies. Example:

proxy_cache_valid  200 302  10m;
proxy_cache_valid  404      1m;

sets a 10-minute cache time for replies with codes 200 and 302, and 1 minute for 404s.

If only time is specified:

proxy_cache_valid  5m;

then only replies with codes 200, 301 and 302 will be cached.

It is also possible to cache any reply with the parameter "any":

proxy_cache_valid  200 302  10m;
proxy_cache_valid  301      1h;
proxy_cache_valid  any      1m;

Upstream cache-related headers have priority over the proxy_cache_valid value; in particular, the order is (from Igor):

  1. X-Accel-Expires
  2. Expires/Cache-Control
  3. proxy_cache_valid

The order in which your backend returns HTTP headers can change caching behaviour.

You may ignore the headers using

proxy_ignore_headers X-Accel-Expires Expires Cache-Control;

Concerning If-Modified-Since / Last-Modified behaviour, remember that by default nginx sends 304 only if the Last-Modified time exactly matches If-Modified-Since. This is controlled by the if_modified_since directive (off | exact | before).

Note: proxy_cache_valid (or equivalent upstream cache headers) must be set for any persistent caching to occur.

proxy_connect_timeout

Syntax: proxy_connect_timeout time
Default: 60s
Context: http, server, location
Reference: proxy_connect_timeout

This directive sets the timeout for establishing a connection to the upstream server. Keep in mind that this timeout cannot exceed 75 seconds.

This is not the time until the server returns a page; that is governed by the proxy_read_timeout directive. If your upstream server is up but hanging (e.g. it does not have enough threads to process your request, so it puts you in a pool of connections to deal with later), then this directive will not help, as the connection to the server has already been made.

proxy_cookie_domain

Syntax: proxy_cookie_domain off
        proxy_cookie_domain domain replacement
Default: off
Context: http, server, location
Appeared in: 1.1.15
Reference: proxy_cookie_domain


proxy_cookie_path

Syntax: proxy_cookie_path off
        proxy_cookie_path path replacement
Default: off
Context: http, server, location
Appeared in: 1.1.15
Reference: proxy_cookie_path


proxy_headers_hash_bucket_size

syntax: proxy_headers_hash_bucket_size size;

default: proxy_headers_hash_bucket_size 64;

context: http, server, location, if

This directive sets the bucket size of the headers hash table, which limits the length of header names. If you use header names longer than 64 characters, increase this value.

proxy_headers_hash_max_size

syntax: proxy_headers_hash_max_size size;

default: proxy_headers_hash_max_size 512;

context: http, server, location, if

This directive sets the maximum size of the headers hash table. It should not be smaller than the number of headers your backend sets.
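If a backend sends many headers, or unusually long header names, both values can be raised together; a sketch (values hypothetical):

```nginx
# Allow header names longer than 64 characters and more distinct
# headers than the default table accommodates.
proxy_headers_hash_bucket_size  128;
proxy_headers_hash_max_size     1024;
```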

proxy_hide_header

Syntax: proxy_hide_header field
Default: none
Context: http, server, location
Reference: proxy_hide_header


nginx does not pass the "Date", "Server", "X-Pad", and "X-Accel-..." header lines from the proxied server's response. The proxy_hide_header directive allows hiding additional header lines. If, on the contrary, header lines must be passed, proxy_pass_header should be used. For example, to hide the MicrosoftOfficeWebServer and X-AspNet-Version headers:

location / {
  proxy_hide_header X-AspNet-Version;
  proxy_hide_header MicrosoftOfficeWebServer;
}

This directive can also be very helpful when using X-Accel-Redirect. For example, one set of backend servers may return the headers for a file download, including an X-Accel-Redirect to the actual file as well as the correct Content-Type. However, the redirect URL points to a file server that hosts the actual file, and that server may send its own (possibly incorrect) Content-Type header, which overrides the header sent by the original backend servers. Adding the proxy_hide_header directive to the file server location avoids this. Example:

location / {
  proxy_pass http://backend_servers;
}
 
location /files/ {
  proxy_pass http://fileserver;
  proxy_hide_header Content-Type;
}

proxy_http_version

Syntax: proxy_http_version 1.0 | 1.1
Default: 1.0
Context: http, server, location
Appeared in: 1.1.4
Reference: proxy_http_version


proxy_ignore_client_abort

Syntax: proxy_ignore_client_abort on | off
Default: off
Context: http, server, location
Reference: proxy_ignore_client_abort

Prevents the request to the proxied server from being aborted when the client aborts its own request.

proxy_ignore_headers

Syntax: proxy_ignore_headers field ...
Default: none
Context: http, server, location
Reference: proxy_ignore_headers

Disables processing of certain header lines from the proxied server's response.

The fields may be "X-Accel-Redirect", "X-Accel-Expires", "Expires", "Cache-Control", or "Set-Cookie". By default, nginx does not cache responses with Set-Cookie.
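A sketch of forcing caching regardless of upstream cache headers (zone name hypothetical):

```nginx
location / {
    proxy_pass            http://backend;
    proxy_cache           one;
    # Ignore upstream headers that would otherwise inhibit caching...
    proxy_ignore_headers  Expires Cache-Control Set-Cookie;
    # ...and rely on an explicit validity time instead.
    proxy_cache_valid     200  10m;
}
```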

proxy_intercept_errors

Syntax: proxy_intercept_errors on | off
Default: off
Context: http, server, location
Reference: proxy_intercept_errors


This directive decides if nginx will intercept responses with HTTP status codes of 400 and higher.

By default all responses will be sent as-is from the proxied server.

If you set this to on then nginx will intercept status codes that are explicitly handled by an error_page directive. Responses with status codes that do not match an error_page directive will be sent as-is from the proxied server.
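A sketch of intercepting upstream errors and substituting a local error page (paths hypothetical):

```nginx
location / {
    proxy_pass              http://backend;
    # 5xx responses from the backend are caught by error_page below;
    # statuses without a matching error_page pass through unchanged.
    proxy_intercept_errors  on;
    error_page              502 503 504  /50x.html;
}

location = /50x.html {
    root  /usr/share/nginx/html;   # hypothetical location of the page
}
```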

proxy_max_temp_file_size

Syntax: proxy_max_temp_file_size size
Default: 1024m
Context: http, server, location
Reference: proxy_max_temp_file_size

The maximum size of a temporary file, used when the response is larger than the proxy buffers. If the response is larger than this size, the remainder will be passed synchronously from the upstream server rather than buffered to disk.

If proxy_max_temp_file_size is set to zero, temporary files are disabled.

proxy_method

syntax: proxy_method [method];

default: none

context: http, server, location

Allows you to override the HTTP method of requests passed to the backend server. If you specify POST, for example, all requests forwarded to the backend will be POST requests.

Example:

proxy_method POST;

proxy_next_upstream

Syntax: proxy_next_upstream error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_404 | off ...
Default: error timeout
Context: http, server, location
Reference: proxy_next_upstream

This directive determines in which cases the request will be passed to the next server:

  • error — an error occurred while connecting to the server, sending a request to it, or reading its response;
  • timeout — a timeout occurred while connecting to the server, transferring the request, or reading the response;
  • invalid_header — the server returned an empty or invalid response;
  • http_500 — the server returned a response with code 500;
  • http_502 — the server returned a response with code 502;
  • http_503 — the server returned a response with code 503;
  • http_504 — the server returned a response with code 504;
  • http_404 — the server returned a response with code 404;
  • off — forbids passing the request to the next server.

Passing the request to the next server is only possible when nothing has yet been sent to the client; that is, if an error or timeout arises in the middle of transferring a response, it is not possible to retry the current request on a different server.
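A sketch of failing over on errors, timeouts, and 503s (upstream name and addresses hypothetical):

```nginx
upstream backends {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}

server {
    location / {
        proxy_pass           http://backends;
        # Retry on the next server for connection errors, timeouts,
        # and 503 responses; possible only before anything has been
        # sent to the client.
        proxy_next_upstream  error timeout http_503;
    }
}
```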

proxy_no_cache

Syntax: proxy_no_cache string ...
Default: none
Context: http, server, location
Reference: proxy_no_cache


Specifies in what cases a response will not be cached, e.g.

proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
proxy_no_cache $http_pragma $http_authorization;

The response is marked uncacheable if any of the arguments expand to anything other than "0" or the empty string. For instance, in the above example, the response will never be cached if the cookie "nocache" is set in the request.

proxy_pass

Syntax: proxy_pass URL
Default: none
Context: location, if in location, limit_except
Reference: proxy_pass


This directive sets the address of the proxied server and the URI to which the location will be mapped. The address may be given as a hostname or an address and port, for example,

proxy_pass http://localhost:8000/uri/;

or as unix socket path:

proxy_pass http://unix:/path/to/backend.socket:/uri/;

The path is given after the word unix, between two colons.

By default, the Host header from the request is not forwarded, but is set based on the proxy_pass statement. To forward the requested Host header, it is necessary to use:

proxy_set_header Host $host;

When passing the request, nginx replaces the part of the URI that matches the location with the URI specified in the proxy_pass directive. There are two exceptions to this rule, in which it is not possible to determine what to replace:

  • if the location is given by regular expression;
  • if the URI inside the proxied location is changed by a rewrite directive, and this configuration will be used to process the request (break):
location  /name/ {
  rewrite      /name/([^/]+)  /users?name=$1  break;
  proxy_pass   http://127.0.0.1;
}

In these cases the URI is passed without mapping.

Furthermore, it is possible to have the URI passed in the same form the client sent it, rather than in processed form. During processing:

  • two or more adjacent slashes are collapsed into one: "//" becomes "/";
  • references to the current directory are removed: "/./" becomes "/";
  • references to the parent directory are removed: "/dir/../" becomes "/".

If it is necessary to transmit the URI in unprocessed form, then the proxy_pass directive should be used without a URI part:

location  /some/path/ {
  proxy_pass   http://127.0.0.1;
}

A special case is the use of variables in the proxy_pass statement: the requested URL is not used, and you are fully responsible for constructing the target URL yourself.

This means the following is not what you want for rewriting into a Zope Virtual Host Monster, as it will always proxy to the same URL (within one server specification):

location / {
  proxy_pass   http://127.0.0.1:8080/VirtualHostBase/https/$server_name:443/some/path/VirtualHostRoot;
}

Instead use a combination of rewrite and proxy_pass:

location / {
  rewrite ^(.*)$ /VirtualHostBase/https/$server_name:443/some/path/VirtualHostRoot$1 break;
  proxy_pass   http://127.0.0.1:8080;
}

In this case URL sanitizing is already done as part of the rewrite, i.e. a trailing slash in the proxy_pass statement has no further effect.

If you need the proxy connection to an upstream server group to use SSL, your proxy_pass rule should use https:// and you will also have to set your SSL port explicitly in the upstream definition. Example:

upstream backend-secure {
  server 10.0.0.20:443;
}
 
server {
  listen 10.0.0.1:443;
  location / {
    proxy_pass https://backend-secure;
  }
}

proxy_pass_header

Syntax: proxy_pass_header field
Default: none
Context: http, server, location
Reference: proxy_pass_header


This directive allows passing header lines that nginx otherwise does not transfer from the proxied server.

For example:

location / {
  proxy_pass_header X-Accel-Redirect;
}


proxy_pass_request_body

syntax: proxy_pass_request_body [ on | off ];

default: proxy_pass_request_body on;

context: http, server, location

version: ≥ 0.1.29

Defines whether the request body should be passed to the proxied server.
It should usually be left on. If you switch it off, do not forget to add:

proxy_set_header Content-Length 0;
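One common pattern for a headers-only subrequest, sketched with a hypothetical auth backend:

```nginx
# Internal authorization check: only the request headers matter,
# so the body is dropped and Content-Length is zeroed to match.
location = /auth {
    internal;
    proxy_pass               http://auth_backend;
    proxy_pass_request_body  off;
    proxy_set_header         Content-Length 0;
}
```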

proxy_pass_request_headers

syntax: proxy_pass_request_headers [ on | off ];

default: proxy_pass_request_headers on;

context: http, server, location

version: ≥ 0.1.29

Defines whether the request headers should be passed to the proxied server.
It should usually be left on.

proxy_redirect

Syntax: proxy_redirect default
        proxy_redirect off
        proxy_redirect redirect replacement
Default: default
Context: http, server, location
Reference: proxy_redirect


This directive sets the text to be replaced in the "Location" and "Refresh" response headers of the proxied server.

Suppose the proxied server returned the line Location: http://localhost:8000/two/some/uri/.

The directive

proxy_redirect http://localhost:8000/two/ http://frontend/one/;

will rewrite this line in the form Location: http://frontend/one/some/uri/.

In the replacement it is possible to omit the server name:

proxy_redirect http://localhost:8000/two/ /;

then the primary server name is substituted, along with the port if it differs from 80.

The default replacement, specified by the parameter "default", uses the parameters of the location and proxy_pass directives.

Hence the two following configurations are equivalent:

location /one/ {
  proxy_pass       http://upstream:port/two/;
  proxy_redirect   default;
}
 
location /one/ {
  proxy_pass       http://upstream:port/two/;
  proxy_redirect   http://upstream:port/two/   /one/;
}

Variables may be used in the replacement string:

proxy_redirect   http://localhost:8000/    http://$host:$server_port/;

This directive can be repeated several times:

proxy_redirect   default;
proxy_redirect   http://localhost:8000/    /;
proxy_redirect   http://www.example.com/   /;

The parameter off disables all proxy_redirect directives at this level:

proxy_redirect   off;
proxy_redirect   default;
proxy_redirect   http://localhost:8000/    /;
proxy_redirect   http://www.example.com/   /;

This directive can also add the host name to relative redirects issued by the proxied server.

proxy_read_timeout

Syntax: proxy_read_timeout time
Default: 60s
Context: http, server, location
Reference: proxy_read_timeout

This directive sets the read timeout for the response of the proxied server. It determines how long nginx will wait for a response to a request. The timeout applies not to the entire response, but only between two successive read operations.

In contrast to proxy_connect_timeout, this timeout will catch a server that accepts your connection into its pool but does not respond beyond that. Be careful not to set it too low, as your proxied server might legitimately take longer to respond to some requests (e.g. when serving a report page that takes time to compute). You can use a different setting per location, which allows a higher proxy_read_timeout for the report page's location.
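The per-location override mentioned above might look like this sketch (paths and values hypothetical):

```nginx
location / {
    proxy_pass          http://backend;
    proxy_read_timeout  60s;    # default pace for ordinary pages
}

location /reports/ {
    proxy_pass          http://backend;
    proxy_read_timeout  300s;   # slow, compute-heavy report pages
}
```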

proxy_redirect_errors

Deprecated. Use proxy_intercept_errors.

proxy_send_lowat

syntax: proxy_send_lowat size;

default: proxy_send_lowat 0;

context: http, server, location, if

This directive sets the SO_SNDLOWAT socket option for outgoing connections.
It is available only on FreeBSD.

proxy_send_timeout

Syntax: proxy_send_timeout time
Default: 60s
Context: http, server, location
Reference: proxy_send_timeout

This directive sets the timeout for transmitting a request to the upstream server. The timeout applies not to the entire transfer of the request, but only between two successive write operations. If the upstream server does not accept new data within this time, nginx closes the connection.

proxy_set_body

syntax: proxy_set_body value;

default: none

context: http, server, location, if

version: >= 0.3.10

Set the body value passed to the backend. This value can contain variables.

proxy_set_header

Syntax: proxy_set_header field value
Default: Host $proxy_host
         Connection close
Context: http, server, location
Reference: proxy_set_header

This directive allows redefining and adding request header lines that will be passed to the proxied server.

The value may consist of text, variables, or a combination of both.

proxy_set_header directives issued at higher levels are only inherited when no proxy_set_header directives have been issued at the given level.

By default only two lines are redefined:

proxy_set_header Host $proxy_host;
proxy_set_header Connection close;

The "Host" request header can be passed unchanged like this:

proxy_set_header Host $http_host;

However, if this header is absent from the client request, nothing will be passed.

In that case it is better to use the $host variable; its value is the server name from the "Host" request header, or the primary server name if the header is absent:

proxy_set_header Host $host;

Furthermore, the server name can be passed together with the port of the proxied server:

proxy_set_header Host $host:$proxy_port;

If the value is an empty string, the header will not be passed to the upstream. For example, this setting can be used to disable gzip compression on the upstream:

proxy_set_header  Accept-Encoding  "";

proxy_ssl_session_reuse

Syntax: proxy_ssl_session_reuse on | off
Default: on
Context: http, server, location
Reference: proxy_ssl_session_reuse

Attempts to reuse SSL sessions when connecting to the upstream server via HTTPS.

proxy_store

Syntax: proxy_store on | off | string
Default: off
Context: http, server, location
Reference: proxy_store

This directive sets the path in which upstream files are stored. The parameter "on" stores files according to the path specified by the alias or root directive. The parameter "off" disables storing. The path can also be specified explicitly using a string with variables:

proxy_store   /data/www$uri;

The modification time of the file is set to the date in the "Last-Modified" response header. To be able to save files in this directory, the temporary files directory given by the proxy_temp_path directive must be on the same filesystem as the data location.

This directive can be used to create local copies of dynamic backend output that changes infrequently, for example:

location /images/ {
  root                 /data/www;
  error_page           404 = /fetch$uri;
}
 
location /fetch {
  internal;
  proxy_pass           http://backend/; # the trailing slash is necessary
  proxy_store          on;
  proxy_store_access   user:rw  group:rw  all:r;
  proxy_temp_path      /data/temp;
  alias                /data/www;
}

or this way:

location /images/ {
  root                 /data/www;
  error_page           404 = @fetch;
}
 
location @fetch {
  internal;
 
  proxy_pass           http://backend;
  proxy_store          on;
  proxy_store_access   user:rw  group:rw  all:r;
  proxy_temp_path      /data/temp;
 
  root                 /data/www;
}

To be clear: proxy_store is not a cache; it is rather a mirror on demand.

proxy_store_access

Syntax: proxy_store_access users:permissions ...
Default: user:rw
Context: http, server, location
Reference: proxy_store_access


This directive assigns the permissions for the created files and directories, for example:

proxy_store_access  user:rw  group:rw  all:r;

If any rights are assigned for group or all, then it is not necessary to assign rights for user:

proxy_store_access  group:rw  all:r;


proxy_temp_file_write_size

Syntax: proxy_temp_file_write_size size
Default: 8k|16k
Context: http, server, location
Reference: proxy_temp_file_write_size


Sets the amount of data that will be flushed to the proxy_temp_path when writing. It may be used to prevent a worker process blocking for too long while spooling data.

proxy_temp_path

Syntax: proxy_temp_path path [level1 [level2 [level3]]]
Default: proxy_temp
Context: http, server, location
Reference: proxy_temp_path


This directive works like client_body_temp_path to specify a location to buffer large proxied requests to the filesystem.

proxy_upstream_fail_timeout

deprecated: 0.5.0 -- Please use the fail_timeout parameter of server directive from the upstream module.

proxy_upstream_max_fails

deprecated: 0.5.0 -- Please use the max_fails parameter of server directive from the upstream module.

Variables

The ngx_http_proxy_module module provides several built-in variables, which can be used to create headers with the proxy_set_header directive:

$proxy_add_x_forwarded_for

Contains the client "X-Forwarded-For" request header with $remote_addr appended, separated by a comma. If there is no X-Forwarded-For request header, $proxy_add_x_forwarded_for is equal to $remote_addr.
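A typical header set using these variables, sketched (upstream name hypothetical):

```nginx
location / {
    proxy_pass        http://backend;
    proxy_set_header  Host             $host;
    proxy_set_header  X-Real-IP        $remote_addr;
    # Appends $remote_addr to any existing X-Forwarded-For header.
    proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
}
```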

$proxy_host

The name and port of the upstream server that handled the request.

$proxy_internal_body_length

Length of the proxy request body set by proxy_set_body.

$proxy_port

The port of the upstream server that handled the request.

References

Original Documentation

Using Apache and Nginx together with the Proxy Module