Connection pooling¶
For HTTP traffic, Envoy supports abstract connection pools that are layered on top of the underlying wire protocol (HTTP/1.1, HTTP/2, HTTP/3). Filter code that uses the pool does not need to be aware of whether the underlying protocol supports true multiplexing. In practice the underlying implementations have the following high level properties:
HTTP/1.1¶
The HTTP/1.1 connection pool acquires connections as needed to an upstream host (up to the circuit breaking limit). Requests are bound to connections as they become available, either because a connection is done processing a previous request or because a new connection is ready to receive its first request. The HTTP/1.1 connection pool does not make use of pipelining so that only a single downstream request must be reset if the upstream connection is severed.
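The connection acquisition limit mentioned above is governed by the cluster's circuit breakers. The following is a minimal sketch of capping connections for an HTTP/1.1 upstream; the cluster name, endpoint, and threshold values are illustrative placeholders:

    clusters:
    - name: http1_upstream                      # illustrative name
      type: STRICT_DNS
      connect_timeout: 1s
      circuit_breakers:
        thresholds:
        - priority: DEFAULT
          max_connections: 100                  # cap on connections the pool may open to this cluster
          max_pending_requests: 200             # requests queued while waiting for a free connection
      load_assignment:
        cluster_name: http1_upstream
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  address: upstream.example.com # placeholder host
                  port_value: 80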
HTTP/2¶
The HTTP/2 connection pool multiplexes multiple requests over a single connection, up to the limits imposed by max concurrent streams and max requests per connection. The HTTP/2 connection pool establishes as many connections as are needed to serve requests. With no limits, this will be only a single connection. If a GOAWAY frame is received or if the connection reaches the maximum requests per connection limit, the connection pool will drain the affected connection. Once a connection reaches its maximum concurrent stream limit, it will be marked as busy until a stream is available. New connections are established anytime there is a pending request without a connection that can be dispatched to (up to circuit breaker limits for connections). HTTP/2 is the preferred communication protocol when Envoy is operating as a reverse proxy, as connections rarely, if ever, get severed.
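The stream and request limits referenced above map to the cluster's HTTP protocol options. A hedged sketch for an explicitly HTTP/2 upstream (the numeric values are illustrative, and this is only a cluster fragment):

    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        common_http_protocol_options:
          max_requests_per_connection: 1000     # drain the connection after this many requests
        explicit_http_config:
          http2_protocol_options:
            max_concurrent_streams: 100         # mark the connection busy once this many streams are active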
HTTP/3¶
The HTTP/3 connection pool multiplexes multiple requests over a single connection, up to the limits imposed by max concurrent streams and max requests per connection. The HTTP/3 connection pool establishes as many connections as are needed to serve requests. With no limits, this will be only a single connection. If a GOAWAY frame is received or if the connection reaches the maximum requests per connection limit, the connection pool will drain the affected connection. Once a connection reaches its maximum concurrent stream limit, it will be marked as busy until a stream is available. New connections are established anytime there is a pending request without a connection that can be dispatched to (up to circuit breaker limits for connections).
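The equivalent limits for an explicitly HTTP/3 upstream live under the HTTP/3 protocol options. This sketch shows only the protocol options fragment; upstream HTTP/3 additionally requires a QUIC upstream transport socket, which is omitted here, and the values are illustrative:

    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        common_http_protocol_options:
          max_requests_per_connection: 1000     # drain the connection after this many requests
        explicit_http_config:
          http3_protocol_options:
            quic_protocol_options:
              max_concurrent_streams: 100       # per-connection stream cap, as with HTTP/2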
Automatic protocol selection¶
For Envoy acting as a forward proxy, the preferred configuration is the AutoHttpConfig, configured via http_protocol_options. By default it will use TCP and ALPN to select the best available protocol, either HTTP/2 or HTTP/1.1.
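A minimal sketch of this automatic configuration, selecting between HTTP/2 and HTTP/1.1 over TLS ALPN; the cluster name and endpoint are placeholders:

    clusters:
    - name: auto_upstream                       # illustrative name
      type: STRICT_DNS
      connect_timeout: 1s
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          auto_config:
            http_protocol_options: {}           # HTTP/1.1 settings
            http2_protocol_options: {}          # HTTP/2 settings; chosen when the peer offers it via ALPN
      transport_socket:                         # TLS is required for ALPN-based selection
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      load_assignment:
        cluster_name: auto_upstream
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  address: upstream.example.com # placeholder host
                  port_value: 443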
If HTTP/3 is configured in the automatic pool, it will currently attempt a QUIC connection first and then, 300ms later, if a QUIC connection has not been established, will also attempt to establish a TCP connection. Whichever handshake succeeds first will be used for the initial stream, but if both TCP and QUIC connections are established, QUIC will eventually be preferred.
If an alternate protocol cache is configured via alternate_protocols_cache_options, then HTTP/3 connections will only be attempted to servers which advertise HTTP/3 support, either via HTTP Alternative Services or (eventually) via the HTTPS DNS resource record or manually configured “QUIC hints”. If no such advertisement exists, then HTTP/2 or HTTP/1 will be used instead.
If no alternate protocol cache is configured, then HTTP/3 connections will be attempted to all servers, even those which do not advertise HTTP/3.
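A hedged sketch of adding HTTP/3 to the automatic pool together with an alternate protocol cache, so that HTTP/3 is only attempted against servers that have advertised it. The cache name is arbitrary, this is only a cluster fragment, and the QUIC-capable transport socket configuration required for HTTP/3 upstreams is omitted:

    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        auto_config:
          http_protocol_options: {}
          http2_protocol_options: {}
          http3_protocol_options: {}            # allow HTTP/3 attempts in the automatic pool
          alternate_protocols_cache_options:
            name: default_alternate_protocols_cache   # arbitrary cache name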
Further, HTTP/3 runs over QUIC (which uses UDP) rather than TCP (which HTTP/1 and HTTP/2 use). It is not uncommon for network devices to block UDP traffic, and hence block HTTP/3. This means that upstream HTTP/3 connection attempts might be blocked by the network and will fall back to HTTP/2 or HTTP/1. This path is alpha and is rapidly undergoing improvements, with the goal of having the default behavior result in optimal latency for internet environments, so please be patient and follow the Envoy release notes to stay apprised of the latest changes.
Happy Eyeballs Support¶
Envoy supports Happy Eyeballs (RFC 6555) for upstream TCP connections. This behavior is on by default but can be disabled via the runtime flag envoy.reloadable_features.allow_multiple_dns_addresses. For clusters which use LOGICAL_DNS, this behavior is configured by setting the DNS IP address resolution policy in config.cluster.v3.Cluster.DnsLookupFamily to the ALL option to return both IPv4 and IPv6 addresses. The returned addresses will be sorted according to the Happy Eyeballs specification and a connection will be attempted to the first address in the list. If this connection succeeds, it will be used. If it fails, an attempt will be made to the next address on the list. If after 300ms the connection is still connecting, a backup connection attempt will be made to the next address on the list. Eventually an attempt to one of the addresses will succeed, in which case that connection will be used, or else all attempts will fail, in which case a connection error will be reported.
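For example, a LOGICAL_DNS cluster opts into this behavior by returning both address families; the cluster name and endpoint below are placeholders:

    clusters:
    - name: dual_stack_upstream                 # illustrative name
      type: LOGICAL_DNS
      dns_lookup_family: ALL                    # return both IPv4 and IPv6 addresses for Happy Eyeballs sorting
      connect_timeout: 1s
      load_assignment:
        cluster_name: dual_stack_upstream
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  address: upstream.example.com # placeholder hostname
                  port_value: 443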
Number of connection pools¶
Each host in each cluster will have one or more connection pools. If the cluster has a single explicit protocol configured, then the host may have only a single connection pool. However, if the cluster supports multiple upstream protocols, then unless it is using ALPN, one connection pool per protocol may be allocated. Separate connection pools are also allocated for other features, for example routing priority and distinct socket or transport socket options.
Each worker thread maintains its own connection pools for each cluster, so if an Envoy has two threads and a cluster with both HTTP/1 and HTTP/2 support, there will be at least 4 connection pools.
Health checking interactions¶
If Envoy is configured for either active or passive health checking, all connection pool connections will be closed for a host that transitions from an available state to an unavailable state. If the host later reenters the load balancing rotation, fresh connections will be created, which maximizes the chance of working around a bad flow (due to an ECMP route or something else).
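As a hedged example, active health checking of the kind described above can be enabled per cluster roughly as follows; the path, thresholds, and timings are illustrative, and this is only a cluster fragment:

    health_checks:
    - timeout: 1s
      interval: 5s
      unhealthy_threshold: 3    # host becomes unavailable after 3 failures; its pooled connections are closed
      healthy_threshold: 2      # host rejoins the rotation after 2 successes; fresh connections are created
      http_health_check:
        path: /healthz          # placeholder health endpoint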