Local Experimentation with Envoy

Tony Allen
3 min read · Jul 14, 2020

I recently helped some colleagues debug an issue with the Envoy fault filter. They remarked that I have a powerful local testing setup, so I thought it would be worthwhile to share it with others who would like to do some local experimentation on Envoy. Most of what’s in this article can be done via Envoy integration tests, but I find this method faster for quick little tests, and it’s the only way I know of to run non-trivial amounts of traffic through a local Envoy process.

The goal here is to spin up an Envoy process on your local Linux (or maybe Mac) machine, send requests through it, and see what it does via print statements and stats (or gdb if you’d prefer). I’ll show off my HTTP/1 workflow for simplicity’s sake. It consists of just 3 parts: an Envoy process, a client, and a server.

An Envoy Process

This is pretty straightforward. One can build a debug Envoy binary with symbols via the following:

bazel build -c dbg //source/exe:envoy-static

Just note that you can look up fancier build options in the developer quick start guide found in the source tree.

The binary will be located at bazel-bin/source/exe/envoy-static relative to the top of the source tree. I like to copy it somewhere so it doesn’t get blown away by subsequent builds.

I keep a generic Envoy configuration file lying around that is some variation on the following:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 9001
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: local_service
          http_filters:
          - name: envoy.router
            typed_config:
              "@type": "type.googleapis.com/envoy.config.filter.http.router.v2.Router"
              start_child_span: true
  clusters:
  - name: local_service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: LEAST_REQUEST
    load_assignment:
      cluster_name: local_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 9002
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8081

That base config will:

  • Listen on port 9001. You send requests there.
  • Route everything to your local host on port 9002.
  • Allow for admin stuff on port 8081. This is where you’d query stats and do lots of other useful things.
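Once the process is up, poking at the admin port with curl is the fastest way to see what Envoy is doing. The endpoints below are standard Envoy admin endpoints:

```shell
# Dump every stat Envoy is tracking (counters, gauges, histograms):
curl -s localhost:8081/stats

# Show per-cluster endpoint state (health, active requests, etc.):
curl -s localhost:8081/clusters

# Dump the configuration the process is actually running with:
curl -s localhost:8081/config_dump
```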

The greatest hits already exist in that base config, but you can add anything else you find useful on top of it. Think more network/HTTP filters, more endpoints (so you can load balance across multiple “servers”), circuit breaking, outlier detection, etc.
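As a concrete example — since the debugging session that prompted this post involved the fault filter — you could insert it ahead of the router in the http_filters list to abort half of all requests with a 503. This is a sketch against the same v2 API the base config uses; double-check the field names against the docs for your Envoy version:

```yaml
http_filters:
- name: envoy.fault
  typed_config:
    "@type": type.googleapis.com/envoy.config.filter.http.fault.v2.HTTPFault
    abort:
      percentage:
        numerator: 50
        denominator: HUNDRED
      http_status: 503
- name: envoy.router
  typed_config:
    "@type": "type.googleapis.com/envoy.config.filter.http.router.v2.Router"
    start_child_span: true
```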

So now you can spin up an Envoy process by feeding it the config file. To keep things simpler and more predictable, I also like to spin up a single worker thread and just let the logs go to stderr by default:

./envoy -c envoy_config.yaml --concurrency 1

Don’t forget to restart the process after each config file change!
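One trick that softens the edit-restart loop: Envoy has a validation mode that parses and checks a config file without opening any listeners, which catches YAML and schema mistakes before you bounce the process:

```shell
# Validate the config and exit without serving traffic:
./envoy --mode validate -c envoy_config.yaml
```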

A Local HTTP Server and Client

I use different tools depending on my goals. To run reproducible load tests I use bufferbloater, but when I just want to send single requests with specific payloads I use Python’s built-in http.server module and curl.

For example, to send a request with specific headers through an Envoy with the basic config we saw earlier, I’d spin up a local HTTP server via:

python3 -m http.server 9002
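It’s worth hitting the Python server directly once to confirm it’s actually serving before you put Envoy in the middle. A quick sanity check (assumes python3 and curl are on your PATH; port 9002 matches the cluster endpoint in the base config):

```shell
# Start the test server in the background:
python3 -m http.server 9002 &
SERVER_PID=$!
sleep 1

# It should answer with a 200 and a directory listing of the cwd:
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9002/

# Clean up:
kill "$SERVER_PID"
```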

and curl requests with a command that looks like:

curl -i -H "x-some-header: some_value" \
     -H "x-another-header: another_value" \
     localhost:9001

This would cause a request to be sent from curl to the Envoy process listening on port 9001. The request would be routed to the HTTP server listening on port 9002 and it would reply through the Envoy process.
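After sending a request or two, the admin port is the quickest way to confirm the traffic really flowed through the proxy. These stat names follow Envoy’s standard cluster.<name>.* and http.<stat_prefix>.* conventions, using the names from the base config:

```shell
# Requests Envoy forwarded to the local_service cluster:
curl -s "localhost:8081/stats?filter=cluster.local_service.upstream_rq_total"

# Downstream requests seen by the ingress_http connection manager:
curl -s "localhost:8081/stats?filter=http.ingress_http.downstream_rq_total"
```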

An Envoy process (top), a Python HTTP server (middle), and curl (bottom).

That’s all, folks!
